Last year I purchased an e-bike upgrade kit for my mother-in-law. We decided to install it on a bicycle she originally bought back in the 80s, which I fixed and refurbished a couple of years ago and used until September 2022 when I bought myself a Dutch Cortina U4.
While I was using this bicycle, I installed a lightweight Shutter Precision dynamo hub and compatible lights: XLC at the front, Büchel at the back. Unfortunately, since Swytch is a front wheel with a built-in electric motor, these lights don't have a dynamo to connect to anymore, and Swytch doesn't have a dedicated connector for lights. I tried asking the manufacturer for more documentation or schematics, but they refused.
Luckily, a Canadian member of the Pedelecs forum managed to reverse-engineer the Swytch connector pinouts, which gave me an idea of how to proceed. Unfortunately, that meant I had to replace both lights, and by trial and error I found specific models that worked. Before the Axa lights, I also tried Büchel's Tivoli e-bike light, but it didn't work because the voltage was too low.
Once I knew what to do, the rest was super easy.
So, here we go:
Get a 3-pin (yellow) male connector with a cable, e.g. off AliExpress. Only two wires will be used: the white one is +4.2V (4-point-2, not 42!), the black one is earth. This will go into the throttle port. If you actually have a throttle, you need some sort of Y-splitter, but I don't, so this was not an issue for me. (However, I bought both sides (M and F), just to be sure.)
Purchase an Axa 606 E6-48 front light. The 606 comes in two versions, for dynamos and for e-bikes; use the one for e-bikes. Despite being officially rated as 6-48V, these lights work quite well off 4.2V too.
Purchase an Axa Spark Steady rear light. This light works with both AC and DC (just like the 606, the official rating is 6-48V), and works off 4.2V without an issue.
Wire the lights up. I used tiny wire terminals to join the wires, but I'm sure there are better options too. Insulate them well, and make sure the red wire from the throttle connector is insulated too. I used a bunch of shrink tubes and black insulation tape. Since the voltage is not wildly different from what the dynamo hub produced (although that was AC, not DC), I was able to reuse the cable I had already routed to the rear carrier.
Lights go on automatically as soon as you touch the power button on the battery pack, and stay on until the battery pack is switched off completely. I was considering adding a handlebar switch, but since I lost the only one I had, I had to do without.
The side effect of using the Axa Spark at the rear is that it has a capacitor inside and keeps going for a couple more minutes after the battery pack is off. I haven't decided whether that's a benefit or a drawback.
The Last Hero is the 27th Discworld novel and part of the Rincewind
subseries. This is something of a sequel to Interesting Times and relies heavily on the cast that was built
up in previous books. It's not a good place to start with the series.
At last, the rare Rincewind novel that I enjoyed. It helps that Rincewind
is mostly along for the ride.
Cohen the Barbarian and his band of elderly heroes have decided they're
tired of enjoying their spoils and are going on a final adventure.
They're going to return fire to the gods, in the form of a giant bomb.
The wizards in Ankh-Morpork get wind of this and realize that an explosion
at the Hub where the gods live could disrupt the magical field of the
entire Disc, effectively destroying it. The only hope seems to be to
reach Cori Celesti before Cohen and head him off, but Cohen is already
almost there. Enter Lord Vetinari, who has Leonard of Quirm design a
machine that will get them there in time by slingshotting under the Disc
itself.
First off, let me say how much I love the idea of returning fire to the
gods with interest. I kind of wish Pratchett had done more with their
motivations, but I was laughing about that through the whole book.
Second, this is the first of the illustrated Discworld books that I've
read in the intended illustrated form (I read the paperback version of
Eric), and this book is gorgeous. I
enjoyed Paul Kidby's art far more than I had expected to. His style is what
I will call, for lack of better terminology due to my woeful art
education, "highly detailed caricature." That's not normally a style that
clicks with me, but it works incredibly well for Discworld.
The Last Hero is richly illustrated, with some amount of art, if
only a subtle background behind the text, on nearly every page. There are
several two-page spreads, but oddly I thought those (including the parody
of The Scream on the cover) were the worst art of the book. None
of them did much for me. The best art is in the figure studies and subtle
details: Leonard of Quirm's beautiful calligraphy, his numerous sketches,
the labeled illustration of the controls of the flying machine, and the
portraits of Cohen's band and the people they encounter. The edition I
got is printed on lovely, thick glossy paper, and the subtle art texture
behind the writing makes this book a delight to read. I'm not sure if,
like Eric, this book comes in other editions, but if so, I highly
recommend getting or finding the high-quality illustrated edition for the
best reading experience.
The plot, like a lot of the Rincewind books, doesn't amount to much, but I
enjoyed the mission to intercept Cohen. Leonard of Quirm is a great
character, and the slow revelation of his flying machine design (which I
will not spoil) is a delightful combination of Leonardo da Vinci parody,
Discworld craziness, and NASA homage. Also, the Librarian is involved,
which always improves a Discworld book. (The Luggage, sadly, is not; I
would have liked to have seen a richly-illustrated story about it, but it
looks like I'll have to find the illustrated version of Eric for
that.)
There is one of Pratchett's philosophical subtexts here, about heroes and
stories and what it means for your story to live on. To be honest, it
didn't grab me; it's mostly subtext, and this particular set of characters
weren't quite introspective enough to make the philosophy central to the
story. Also, I was perhaps too sympathetic to Cohen's goals, and thus not
very interested in anyone successfully stopping him. But I had a lot more
fun with this one than I usually do with Rincewind books, helped
considerably by the illustrations. If you've been skipping Rincewind
books in your Discworld read-through and have access to the illustrated
edition of The Last Hero, consider making an exception for this
one.
Followed by The Amazing Maurice and His Educated Rodents in
publication order and, thematically, by Unseen Academicals.
Rating: 7 out of 10
I recently mentioned on the internet that I did work in this direction and a friend of mine asked me to write a blog post on this. I didn't blog for a long time (keeping all the goodness for myself, hehe), so here we go. To set the scene, let's assume we want to make an executable binary for x86_64 Linux that's supposed to be extremely portable. It should work on both Debian and Arch Linux. It should work on systems without glibc, like Alpine Linux. It should even work in a FROM scratch Docker container. In a more serious setting you would statically link musl-libc with your Rust program, but today we're in a silly-goofy mood so we're going to try to make this work without a libc. And we're also going to use Rust for this, more specifically the stable release channel of Rust, so this blog post won't use any nightly-only features that might still change/break. If you're using a Rust version that was recent at the time of writing or later (>= 1.68.0 according to my computer), you should be able to try this at home just fine.
This tutorial assumes you have no prior programming experience in any programming language, but it's going to involve some x86_64 assembly. If you already know what a syscall is, you'll be just fine. If this is your first exposure to programming you might still be able to follow along, but it might be a wild ride.
If you haven t already, install rustup (possibly also available in your package manager, who knows?)
# when asked, press enter to confirm default settings
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
This is going to install everything you need to use Rust on Linux (this tutorial assumes you're following along on Linux, btw). Usually it's still using a system linker (by calling the cc binary, and it errors out if none is present), but instead we're going to use rustup to install an additional target:
rustup target add x86_64-unknown-none
I don't know if/how this is made available by Linux distributions, so I recommend following along with Rust installed from rustup.
Anyway, we're creating a new project with cargo; this creates a new directory that we can then change into (you might've done this before):
cargo new hack-the-planet
cd hack-the-planet
There's going to be a file named Cargo.toml. We don't need to make any changes there, but the one that was auto-generated for me at the time of writing looks like this:
[package]
name = "hack-the-planet"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
There's a second file named src/main.rs; it's going to contain some pre-generated hello world, but we're going to delete it and create a new, empty file:
rm src/main.rs
touch src/main.rs
Alrighty, leaving this file empty is not valid, but we're going to walk through the individual steps, so we're going to try to build with an empty file first. At this point I would like to credit this chapter of a fasterthanli.me series and a blog post by Philipp Oppermann; this tutorial is merely a 2023 update and makes it work with stable Rust. Let's run the build:
$ cargo build --release --target x86_64-unknown-none
Compiling hack-the-planet v0.1.0 (/hack-the-planet)
error[E0463]: can't find crate for `std`
  = note: the `x86_64-unknown-none` target may not support the standard library
  = note: `std` is required by `hack_the_planet` because it does not declare `#![no_std]`

error[E0601]: `main` function not found in crate `hack_the_planet`
  = note: consider adding a `main` function to `src/main.rs`

Some errors have detailed explanations: E0463, E0601.
For more information about an error, try `rustc --explain E0463`.
error: could not compile `hack-the-planet` due to 2 previous errors
Since this doesn't use a libc (oh right, I forgot to mention this up to this point actually), this also means there's no std standard library. Usually the standard library of Rust still uses the system libc to do syscalls, but since we specify our libc as "none" this means std won't be available (use std::fs::rename won't work). There are still other functions we can use and import; for example there's core, which is effectively a second standard library, but much smaller.
To opt-out of the std standard library, we can put #![no_std] into src/main.rs:
Rust noticed we didn't define a main function and suggests we add one. This isn't what we want though, so we'll politely decline and inform Rust we don't have a main and it shouldn't attempt to call it. We're adding #![no_main] to our file, and src/main.rs now looks like this:
#![no_std]
#![no_main]
Running the build again:
$ cargo build
Compiling hack-the-planet v0.1.0 (/hack-the-planet)
error: `#[panic_handler]` function required, but not found

error: language item required, but not found: `eh_personality`
  = note: this can occur when a binary crate with `#![no_std]` is compiled for a target where `eh_personality` is defined in the standard library
  = help: you may be able to compile for a target that doesn't need `eh_personality`, specify a target with `--target` or in `.cargo/config`

error: could not compile `hack-the-planet` due to 2 previous errors
Rust is asking us for a panic handler, basically "I'm going to jump to this address if something goes terribly wrong and execute whatever you put there". Eventually we would put some code there to just exit the program, but for now an infinite loop will do. This is likely going to get stripped away by the compiler anyway if it notices our program has no code branches leading to a panic and the code is unused. Our src/main.rs now looks like this:
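The snippet, reconstructed from the description above (a no_std, no_main crate whose panic handler just loops forever), would look something like this:

```rust
#![no_std]
#![no_main]

use core::panic::PanicInfo;

// Jump target for "something went terribly wrong": just spin forever.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```

This only builds with cargo build --release --target x86_64-unknown-none, as set up earlier.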
$ objdump -d target/x86_64-unknown-none/release/hack-the-planet
target/x86_64-unknown-none/release/hack-the-planet: file format elf64-x86-64
Ok, that looks pretty "from scratch" to me. The file contains no cpu instructions. Also note how our infinite loop is not present (as predicted).
Making a basic program and executing it
Ok, let's try to make a valid program that basically just cleanly exits. First let's try to add some cpu instructions and verify they're indeed getting executed. Lemme introduce: the INT3 instruction in x86_64 assembly. In binary it's also known as the 0xCC opcode. It crashes our program in a slightly different way, so if the error message changes, we know it worked. The other tutorials use a #[naked] function for the entry point, but since this feature isn't stabilized at the time of writing, we're going to use the global_asm! macro. Also don't worry, I'm not going to introduce every assembly instruction individually. Our program now looks like this:
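Sketching this out, assuming a global_asm! block that defines _start with a single breakpoint instruction (the panic handler from before stays in):

```rust
#![no_std]
#![no_main]

use core::arch::global_asm;
use core::panic::PanicInfo;

// Entry point: a single breakpoint instruction (opcode 0xCC).
global_asm! {
    ".global _start",
    "_start:",
    "int3",
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```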
The error message of the crash is now slightly different because it's hitting our breakpoint cpu instruction. Fun fact btw: if you run this in strace you can see this isn't making any system calls (aka not talking to the kernel at all, it just crashes):
Let's try to make a program that does a clean shutdown. To do this, we inform the kernel with a system call that we would like to exit. We can get more info on this with man 2 exit, which defines _exit like this:
[[noreturn]] void _exit(int status);
On Linux this syscall is actually called _exit and exit is implemented as a libc function, but we don't care about any of that today; it's going to do the job just fine. Also note how it takes a single argument of type int. In C-speak this means "signed 32 bit", i32 in Rust.
Next we need to figure out the syscall number of this syscall. These numbers are cpu-architecture-specific for some reason (idk, idc). We're looking these numbers up with ripgrep in /usr/include/asm/:
Since we're on x86_64 the correct value is the one in unistd_64.h: 60. Also, on x86_64 the syscall number goes into the rax cpu register, and the status argument goes in the rdi register. The return value of the syscall is going to be placed in the rax register after the syscall is done, but for exit the execution is never given back to us. Let's try to write 60 into the rax register and 69 into the rdi register. To copy into registers we're going to use the mov destination, source instruction, which copies from source to destination. With these registers set up we can use the syscall cpu instruction to hand execution over to the kernel. Don't worry, there's only one more assembly instruction coming, and for everything else we're going to use Rust.
Our code now looks like this:
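Following the steps just described (syscall number 60 in rax, status 69 in rdi, then syscall), the entry point becomes:

```rust
#![no_std]
#![no_main]

use core::arch::global_asm;
use core::panic::PanicInfo;

// exit(69): syscall number 60 goes into rax, the status into rdi.
global_asm! {
    ".global _start",
    "_start:",
    "mov rax, 60",
    "mov rdi, 69",
    "syscall",
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```

After running it, echo $? should show the exit status 69.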
Writing Rust
Ok, but even though cpu instructions can be fun at times, I'd rather not deal with them most of the time (this might strike you as odd, considering this blog post). Instead, let's try to define a function in Rust and call into that instead. We're going to define this function as unsafe (btw, none of this is taking advantage of the safety guarantees of Rust, in case it wasn't obvious. This tutorial is mostly going to stick to unsafe Rust, but for bigger projects you can attempt to reduce your usage of unsafe to opt back into normal, safe Rust). It also declares the function with #[no_mangle] so the function name is preserved as main and we can call it from our global_asm entry point. Lastly, when our program is started it's going to get the stack address passed in one of the cpu registers; this value is expected to be passed to our function as an argument. Our function declares ! as return type, which means it never returns:
#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    // TODO: this is missing
}
This won't compile yet; we need to add our assembly for the exit syscall back in.
This time we're using the asm! macro, which is a slightly more declarative approach. We want to run the syscall cpu instruction with 60 in the rax register, and this time we want the rdi register to be zero, to indicate a successful exit. We also use options(noreturn) so Rust knows it should assume execution does not resume after this assembly is executed (the Linux kernel guarantees this). We modify our global_asm! entrypoint to call our new main function and to copy the stack address from rsp into the register for the first argument, rdi, because it would otherwise get lost forever:
After building and disassembling this the Rust compiler is slowly starting to do work for us:
$ cargo build --release --target x86_64-unknown-none
$ objdump -d target/x86_64-unknown-none/release/hack-the-planet
target/x86_64-unknown-none/release/hack-the-planet: file format elf64-x86-64
Disassembly of section .text:
0000000000001210 <_start>:
1210: 48 89 e7 mov %rsp,%rdi
1213: e8 08 00 00 00 call 1220 <main>
1218: cc int3
1219: cc int3
121a: cc int3
121b: cc int3
121c: cc int3
121d: cc int3
121e: cc int3
121f: cc int3
0000000000001220 <main>:
1220: 50 push %rax
1221: b8 3c 00 00 00 mov $0x3c,%eax
1226: 31 ff xor %edi,%edi
1228: 0f 05 syscall
122a: 0f 0b ud2
The mov and syscall instructions are still the same, but the compiler noticed it can XOR the rdi register with itself to set it to zero. It's using x86 assembly language (the 32-bit variant of x86_64, which also happens to work on x86_64) to do so; that's why the register is referred to as edi in the disassembly. You can also see it's inserting a bunch of 0xCC instructions (for alignment), and Rust puts the opcodes 0x0F 0x0B at the end of the function to force an invalid-opcode exception, so the program is guaranteed to crash in case the exit syscall doesn't do it.
This code still executes as expected:
Adding functions
Ok, we're getting closer, but we aren't quite there yet. Let's try to write an exit function for our assembly that we can then call like a normal function. Remember that it takes a signed 32-bit integer that's supposed to go into rdi.
Actually, since this function doesn't take any raw pointers and any i32 is valid for this syscall, we're going to remove the unsafe marker from this function. When doing this we still need to use unsafe within the function for our inline assembly.
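The exit function and the updated main, as just described, would be along these lines (the rest of the file stays unchanged):

```rust
// exit takes any i32, so the function itself can be safe;
// the inline assembly still needs an unsafe block.
fn exit(status: i32) -> ! {
    unsafe {
        asm!(
            "syscall",
            in("rax") 60,
            in("rdi") status,
            options(noreturn)
        );
    }
}

#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    exit(0);
}
```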
Running this still works, but interestingly the generated assembly didn't change at all:
$ cargo build --release --target x86_64-unknown-none
$ objdump -d target/x86_64-unknown-none/release/hack-the-planet
target/x86_64-unknown-none/release/hack-the-planet: file format elf64-x86-64
Disassembly of section .text:
0000000000001210 <_start>:
1210: 48 89 e7 mov %rsp,%rdi
1213: e8 08 00 00 00 call 1220 <main>
1218: cc int3
1219: cc int3
121a: cc int3
121b: cc int3
121c: cc int3
121d: cc int3
121e: cc int3
121f: cc int3
0000000000001220 <main>:
1220: 50 push %rax
1221: b8 3c 00 00 00 mov $0x3c,%eax
1226: 31 ff xor %edi,%edi
1228: 0f 05 syscall
122a: 0f 0b ud2
Rust noticed there's no need to make it a separate function at runtime and instead merged the instructions of the exit function directly into our main. It also noticed the 0 argument in exit(0) means rdi is supposed to be zero and uses the XOR optimization mentioned before.
Since main is not calling any unsafe functions anymore, we could mark it as safe too, but in the next few functions we're going to deal with file descriptors and raw pointers, so this is likely the only safe function we're going to write in this tutorial; let's just keep the unsafe marker.
Printing text
Ok, let's try to do a quick hello world. To do this we're going to call the write syscall. Looking it up with man 2 write:
The write syscall takes 3 arguments and returns a signed size_t. In Rust this is called isize. In C, size_t is an unsigned integer type that can hold any value of sizeof(...) for the given platform; ssize_t can only store half of that because it uses one of the bits to indicate that an error has occurred (the first s means "signed"; write returns -1 in case of an error).
The arguments for write are:
the file descriptor to write to. stdout is located on file descriptor 1.
a pointer/address to some memory.
the number of bytes that should be written, starting at the given address.
Now that's a lot of stuff at once. Since this syscall is actually going to hand execution back to our program, we need to let Rust know which cpu registers the syscall is writing to, so Rust doesn't attempt to use them to store data (which would be silently overwritten by the syscall). inlateout("rax") 1 => r0 means we're writing a value to the register and want the result back in variable r0. in("rdi") fd means we want to write the value of fd into the rdi register. lateout("rcx") _ means the Linux kernel may write to that register (so the previous value may get lost), but we don't want to store the value anywhere (the underscore acts as a dummy variable name).
This doesn't compile just yet though:
Rust has inferred the type of r0 is isize, since that's what our function returns, but the type of the input value for the register was inferred to be i32. We're going to select a specific number type to fix this.
We need to set the number of bytes we want to write explicitly because there's no concept of null-byte termination in the write system call; it's quite literally "write the next X bytes, starting from this address". Our program now looks like this:
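Combining everything described above, the write function and the updated main would be roughly:

```rust
fn write(fd: i32, buf: &[u8]) -> isize {
    let r0;
    unsafe {
        asm!(
            "syscall",
            // write is syscall number 1; the return value comes back in rax
            inlateout("rax") 1isize => r0,
            in("rdi") fd,
            in("rsi") buf.as_ptr(),
            in("rdx") buf.len(),
            // the kernel clobbers rcx and r11 during a syscall
            lateout("rcx") _,
            lateout("r11") _,
        );
    }
    r0
}

#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    write(1, b"Hello world\n");
    exit(0);
}
```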
This time there are 2 syscalls: first write, then exit. For write, it's setting up the 3 arguments in our cpu registers (rdi, rsi, rdx). The lea instruction subtracts 0x102b from the rip register (the instruction pointer) and places the result in the rsi register. This is effectively saying "an address relative to wherever this code was loaded into memory". The instruction pointer is going to point directly behind the opcodes of the lea instruction, so 0x1238 - 0x102b = 0x20d. This address is also pointed out in the disassembly as a comment.
We don't see the string in our disassembly, but we can convert our 0x20d hex to 525 in decimal and use dd to read 12 bytes from that offset, and sure enough:
$ dd bs=1 skip=525 count=12 if=target/x86_64-unknown-none/release/hack-the-planet
Hello world
12+0 records in
12+0 records out
Executing our binary with strace also shows the new write syscall (and the bytes that are being written, mixed up in the output).
$ strace -f ./hack-the-planet
execve("./hack-the-planet", ["./hack-the-planet"], 0x74493abe64a8 /* 39 vars */) = 0
write(1, "Hello world\n", 12Hello world
) = 12
exit(0) = ?
+++ exited with 0 +++
After running strip on it to remove some symbols, the binary is so small that if you open it in a text editor it fits on a screenshot:
Thief of Time is the 26th Discworld novel and the last Death novel,
although he still appears in subsequent books. It's the third book
starring Susan Sto Helit, so I don't recommend starting here.
Mort is the best starting point for the
Death subseries, and Reaper Man provides
a useful introduction to the villains.
Jeremy Clockson was an orphan raised by the Guild of Clockmakers. He is
very good at making clocks. He's not very good at anything else,
particularly people, but his clocks are the most accurate in Ankh-Morpork.
He is therefore the logical choice to receive a commission from a mysterious
noblewoman who wants him to make the most accurate possible clock: a clock
that can measure the tick of the universe, one that a fairy tale says had
been nearly made before. The commission is followed by a surprise
delivery of an Igor, to help with the clock-making.
People who live in places with lots of fields become farmers. People who
live where there is lots of iron and coal become blacksmiths. And people
who live in the mountains near the Hub, near the gods and full of magic,
become monks. In the highest valley are the History Monks, founded by Wen
the Eternally Surprised. Like most monks, they take apprentices with
certain talents and train them in their discipline. But Lobsang Ludd, an
orphan discovered in the Thieves Guild in Ankh-Morpork, is proving a
challenge. The monks decide to apprentice him to Lu-Tze the sweeper;
perhaps that will solve multiple problems at once.
Since Hogfather, Susan has moved from
being a governess to a schoolteacher. She brings to that job the same
firm patience, total disregard for rules that apply to other people, and
impressive talent for managing children. She is by far the most popular
teacher among the kids, and not only because she transports her class all
over the Disc so that they can see things in person. It is a job that she
likes and understands, and one that she's quite irate to have interrupted
by a summons from her grandfather. But the Auditors are up to something,
and Susan may be able to act in ways that Death cannot.
This was great. Susan has quickly become one of my favorite Discworld
characters, and this time around there is no (or, well, not much)
unbelievable romance or permanently queasy god to distract. The
clock-making portions of the book quickly start to focus on Igor, who is a
delightful perspective through whom to watch events unfold. And the
History Monks! The metaphysics of what they are actually doing (which I
won't spoil, since discovering it slowly is a delight) is perhaps my
favorite bit of Discworld world building to date. I am a sucker for
stories that focus on some process that everyone thinks happens
automatically and investigate the hidden work behind it.
I do want to add a caveat here that the monks are in part a parody of
Himalayan Buddhist monasteries, Lu-Tze is rather obviously a parody of
Laozi and Daoism in general, and Pratchett's parodies of non-western
cultures are rather ham-handed. This is not quite the insulting mess that
the Chinese parody in Interesting Times
was, but it's heavy on the stereotypes. It does not, thankfully,
rely on the stereotypes; the characters are great fun on their own
terms, with the perfect (for me) balance of irreverence and
thoughtfulness. Lu-Tze refusing to be anything other than a sweeper and
being irritatingly casual about all the rules of the order is a classic
bit that Pratchett does very well. But I also have the luxury of ignoring
stereotypes of a culture that isn't mine, and I think Pratchett is on
somewhat thin ice.
As one specific example, having Lu-Tze's treasured sayings be a collection
of banal aphorisms from a random Ankh-Morpork woman is both hilarious and
also arguably rather condescending, and I'm not sure where I landed. It's
a spot-on bit of parody of how a lot of people who get very into "eastern
religions" sound, but it's also equating the Dao De Jing with
advice from the Discworld equivalent of an English housewife. I think the
generous reading is that Lu-Tze made the homilies profound by looking at
them in an entirely different way than the woman saying them, and that's
not completely unlike Daoism and works surprisingly well. But that's
reading somewhat against the grain; Pratchett is clearly making fun of
philosophical koans, and while anything is fair game for some friendly
poking, it still feels a bit weird.
That isn't the part of the History Monks that I loved, though. Their
actual role in the story doesn't come out of the parody. It's something
entirely native to Discworld, and it's an absolute delight. The scene
with Lobsang and the procrastinators is perhaps my favorite Discworld set
piece to date. Everything about the technology of the History Monks, even
the Bond parody, is so good.
I grew up reading the Marvel Comics universe, and Thief of Time
reminds me of a classic John Byrne or Jim Starlin story, where the heroes
are dumped into the middle of vast interdimensional conflicts involving
barely-anthropomorphized cosmic powers and the universe is revealed to
work in ever more intricate ways at vastly expanding scales. The Auditors
are villains in exactly that tradition, and just like the best of those
stories, the fulcrum of the plot is questions about what it means to be
human, what it means to be alive, and the surprising alliances these
non-human powers make with humans or semi-humans. I devoured this kind of
story as a kid, and it turns out I still love it.
The one complaint I have about the plot is that the best part of this book
is the middle, and the end didn't entirely work for me. Ronnie Soak is at
his best as a supporting character about three quarters of the way through
the book, and I found the ending of his subplot much less interesting.
The cosmic confrontation was oddly disappointing, and there's a whole
extended sequence involving chocolate that I think was funnier in
Pratchett's head than it was in mine. The ending isn't bad, but
the middle of this book is my favorite bit of Discworld writing yet, and I
wish the story had carried that momentum through to the end.
I had so much fun with this book. The Discworld novels are clearly
getting better. None of them have yet vaulted into the ranks of my
all-time favorite books (there's always some lingering quibble or sagging
bit), but it feels like they've gone from reliably good books to more
reliably great books. The acid test is coming, though: the next book is a
Rincewind book, which are usually the weak spots.
Followed by The Last Hero in publication order. There is no direct
thematic sequel.
Rating: 8 out of 10
Akvorado collects sFlow and IPFIX flows, stores them in a
ClickHouse database, and presents them in a web console. Although it lacks
built-in DDoS detection, it's possible to create one by crafting custom
ClickHouse queries.
DDoS detection
Let's assume we want to detect DDoS attacks targeting our customers. As an example, we
consider a DDoS attack as a collection of flows over one minute targeting a
single customer IP address, from a single source port and matching one
of these conditions:
an average bandwidth of 1 Gbps,
an average bandwidth of 200 Mbps when the protocol is UDP,
more than 20 source IP addresses and an average bandwidth of 100 Mbps, or
more than 10 source countries and an average bandwidth of 100 Mbps.
Here is the SQL query to detect such attacks over the last 5 minutes:
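A sketch of such a ClickHouse query, assuming Akvorado's default flows table and column names (TimeReceived, DstAddr, SrcAddr, SrcPort, Proto as the IP protocol number, Bytes, SamplingRate, SrcCountry); adjust the table, the column names, and how you restrict to customer destinations to match your own schema:

```sql
SELECT
  toStartOfMinute(TimeReceived) AS Minute,
  DstAddr,
  SrcPort,
  Proto,
  sum(Bytes * SamplingRate) * 8 / 60 AS bps,   -- average bandwidth over 1 minute
  uniq(SrcAddr) AS sources,
  uniq(SrcCountry) AS countries
FROM flows
WHERE TimeReceived > now() - INTERVAL 5 MINUTE
GROUP BY Minute, DstAddr, SrcPort, Proto
HAVING bps > 1e9                               -- 1 Gbps
    OR (Proto = 17 AND bps > 200e6)            -- 200 Mbps when UDP
    OR (sources > 20 AND bps > 100e6)          -- many source addresses
    OR (countries > 10 AND bps > 100e6)        -- many source countries
ORDER BY Minute, bps DESC
```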
DDoS remediation
Once detected, there are at least two ways to stop the attack at the network
level:
blackhole the traffic to the targeted user (RTBH), or
selectively drop packets matching the attack patterns (Flowspec).
Traffic blackhole
The easiest method is to sacrifice the attacked user. While this helps the
attacker, this protects your network. It is a method supported by all routers.
You can also offload this protection to many transit providers. This is useful
if the attack volume exceeds your internet capacity.
This works by advertising with BGP a route to the attacked user with a specific
community. The border router modifies the next hop address of these routes to a
specific IP address configured to forward the traffic to a null interface. RFC 7999 defines 65535:666 for this purpose. This is known as a
remote-triggered blackhole (RTBH) and is explained in more detail in RFC 3882.
It is also possible to blackhole the source of the attacks by leveraging
unicast Reverse Path Forwarding (uRPF) from RFC 3704, as explained in RFC 5635. However, uRPF can be a serious tax on your router resources. See
NCS5500 uRPF: Configuration and Impact on Scale for an example of the kind
of restrictions you have to expect when enabling uRPF.
On the advertising side, we can use BIRD. Here is a complete configuration
file to allow any router to collect them:
log stderr all;
router id 192.0.2.1;

protocol device {
  scan time 10;
}

protocol bgp exporter {
  ipv4 {
    import none;
    export where proto = "blackhole4";
  };
  ipv6 {
    import none;
    export where proto = "blackhole6";
  };
  local as 64666;
  neighbor range 192.0.2.0/24 external;
  multihop;
  dynamic name "exporter";
  dynamic name digits 2;
  graceful restart yes;
  graceful restart time 0;
  long lived graceful restart yes;
  long lived stale time 3600;  # keep routes for 1 hour!
}

protocol static blackhole4 {
  ipv4;
  route 203.0.113.206/32 blackhole {
    bgp_community.add((65535, 666));
  };
  route 203.0.113.68/32 blackhole {
    bgp_community.add((65535, 666));
  };
}
protocol static blackhole6 {
  ipv6;
}
We use BGP long-lived graceful restart to ensure routes are kept for
one hour, even if the BGP connection goes down, notably during maintenance.
On the receiver side, if you have a Cisco router running IOS XR, you can use the
following configuration to blackhole traffic received on the BGP session. As the
BGP session is dedicated to this usage, the community is not used, but you can
also forward these routes to your transit providers.
router static
 vrf public
  address-family ipv4 unicast
   192.0.2.1/32 Null0 description "BGP blackhole"
  !
  address-family ipv6 unicast
   2001:db8::1/128 Null0 description "BGP blackhole"
  !
 !
!
route-policy blackhole_ipv4_in_public
  if destination in (0.0.0.0/0 le 31) then
    drop
  endif
  set next-hop 192.0.2.1
  done
end-policy
!
route-policy blackhole_ipv6_in_public
  if destination in (::/0 le 127) then
    drop
  endif
  set next-hop 2001:db8::1
  done
end-policy
!
router bgp 12322
 neighbor-group BLACKHOLE_IPV4_PUBLIC
  remote-as 64666
  ebgp-multihop 255
  update-source Loopback10
  address-family ipv4 unicast
   maximum-prefix 100 90
   route-policy blackhole_ipv4_in_public in
   route-policy drop out
   long-lived-graceful-restart stale-time send 86400 accept 86400
  !
  address-family ipv6 unicast
   maximum-prefix 100 90
   route-policy blackhole_ipv6_in_public in
   route-policy drop out
   long-lived-graceful-restart stale-time send 86400 accept 86400
  !
 !
 vrf public
  neighbor 192.0.2.1
   use neighbor-group BLACKHOLE_IPV4_PUBLIC
   description akvorado-1
When the traffic is blackholed, it is still reported by IPFIX and sFlow.
In Akvorado, use ForwardingStatus >= 128 as a filter.
While this method is compatible with all routers, it makes the attack successful
as the target is completely unreachable. If your router supports it, Flowspec
can selectively filter flows to stop the attack without impacting the
customer.
Flowspec
Flowspec is defined in RFC 8955 and enables the transmission of flow
specifications in BGP sessions. A flow specification is a set of matching
criteria to apply to IP traffic. These criteria include the source and
destination prefix, the IP protocol, the source and destination port, and the
packet length. Each flow specification is associated with an action, encoded as an
extended community: traffic shaping, traffic marking, or redirection.
To announce flow specifications with BIRD, we extend our configuration. The
extended community used shapes the matching traffic to 0 bytes per second.
flow4 table flowtab4;
flow6 table flowtab6;
protocol bgp exporter {
  flow4 {
    import none;
    export where proto = "flowspec4";
  };
  flow6 {
    import none;
    export where proto = "flowspec6";
  };
  # […]
}
protocol static flowspec4 {
  flow4;
  route flow4 {
    dst 203.0.113.68/32;
    sport = 53;
    length >= 1476 && <= 1500;
    proto = 17;
  } {
    bgp_ext_community.add((generic, 0x80060000, 0x00000000));
  };
  route flow4 {
    dst 203.0.113.206/32;
    sport = 123;
    length = 468;
    proto = 17;
  } {
    bgp_ext_community.add((generic, 0x80060000, 0x00000000));
  };
}
protocol static flowspec6 {
  flow6;
}
If you have a Cisco router running IOS XR, the configuration may look like
this:
vrf public
 address-family ipv4 flowspec
 address-family ipv6 flowspec
!
router bgp 12322
 address-family vpnv4 flowspec
 address-family vpnv6 flowspec
 neighbor-group FLOWSPEC_IPV4_PUBLIC
  remote-as 64666
  ebgp-multihop 255
  update-source Loopback10
  address-family ipv4 flowspec
   long-lived-graceful-restart stale-time send 86400 accept 86400
   route-policy accept in
   route-policy drop out
   maximum-prefix 100 90
   validation disable
  !
  address-family ipv6 flowspec
   long-lived-graceful-restart stale-time send 86400 accept 86400
   route-policy accept in
   route-policy drop out
   maximum-prefix 100 90
   validation disable
  !
 !
 vrf public
  address-family ipv4 flowspec
  address-family ipv6 flowspec
  neighbor 192.0.2.1
   use neighbor-group FLOWSPEC_IPV4_PUBLIC
   description akvorado-1
Then, you need to enable Flowspec on all interfaces with:
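On IOS XR, this presumably looks like the following; check your platform's documentation for the exact statement:

```
flowspec
 address-family ipv4
  local-install interface-all
 !
 address-family ipv6
  local-install interface-all
 !
!
```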
As with the RTBH setup, you can filter dropped flows with ForwardingStatus >=
128.
DDoS detection (continued)
In the example using Flowspec, the flows were also filtered on the length of the packet:
route flow4 {
  dst 203.0.113.68/32;
  sport = 53;
  length >= 1476 && <= 1500;
  proto = 17;
} {
  bgp_ext_community.add((generic, 0x80060000, 0x00000000));
};
This is an important addition: legitimate DNS requests are smaller than this and
therefore not filtered.2 With ClickHouse, you can get the 10th
and 90th percentiles of the packet sizes with quantiles(0.1,
0.9)(Bytes/Packets).
The last issue we need to tackle is how to optimize the request: it may take
several seconds to collect the data and is likely to consume substantial
resources from your ClickHouse database. One solution is to create a
materialized view to pre-aggregate results:
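A sketch of what this pre-aggregation could look like; the column names, types, and aggregate combinators below are assumptions, not the exact original definition:

```sql
-- Hypothetical sketch; adapt names and types to your schema.
CREATE TABLE ddos_logs (
  TimeReceived DateTime,
  DstAddr IPv6,
  Proto UInt32,
  SrcPort UInt16,
  SumBytes SimpleAggregateFunction(sum, UInt64),
  SumPackets SimpleAggregateFunction(sum, UInt64),
  sources AggregateFunction(uniqCombined, IPv6),
  countries AggregateFunction(uniqCombined, FixedString(2)),
  size AggregateFunction(quantiles(0.1, 0.9), UInt64)
)
ENGINE = SummingMergeTree
ORDER BY (TimeReceived, DstAddr, Proto, SrcPort);

CREATE MATERIALIZED VIEW ddos_logs_view TO ddos_logs AS
SELECT
  toStartOfMinute(TimeReceived) AS TimeReceived,
  DstAddr,
  Proto,
  SrcPort,
  sum(Bytes * SamplingRate) AS SumBytes,
  sum(Packets * SamplingRate) AS SumPackets,
  uniqCombinedState(SrcAddr) AS sources,
  uniqCombinedState(SrcCountry) AS countries,
  quantilesState(0.1, 0.9)(intDiv(Bytes, Packets)) AS size
FROM flows
GROUP BY TimeReceived, DstAddr, Proto, SrcPort;
```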
The ddos_logs table is using the SummingMergeTree engine. When the table
receives new data, ClickHouse replaces all the rows with the same sorting key,
as defined by the ORDER BY directive, with one row which contains summarized
values using either the sum() function or the explicitly specified aggregate
function (uniqCombined and quantiles in our example).3
Finally, we can modify our initial query to read from this pre-aggregated table.
Gluing everything together
To sum up, building an anti-DDoS system requires following these steps:
define a set of criteria to detect a DDoS attack,
translate these criteria into SQL requests,
pre-aggregate flows into SummingMergeTree tables,
query and transform the results to a BIRD configuration file, and
configure your routers to pull the routes from BIRD.
A Python script like the following one can handle the fourth step. For each
attacked target, it generates both a Flowspec rule and a blackhole route.
import filecmp
import logging
import os
import socket
import subprocess
import types

from clickhouse_driver import Client as CHClient

logger = logging.getLogger("ddos-mitigation")

# Put your SQL query here!
SQL_QUERY = "…"

# How many anti-DDoS rules we want at the same time?
MAX_DDOS_RULES = 20


def empty_ruleset():
    ruleset = types.SimpleNamespace()
    ruleset.flowspec = types.SimpleNamespace()
    ruleset.blackhole = types.SimpleNamespace()
    ruleset.flowspec.v4 = []
    ruleset.flowspec.v6 = []
    ruleset.blackhole.v4 = []
    ruleset.blackhole.v6 = []
    return ruleset


current_ruleset = empty_ruleset()

client = CHClient(host="clickhouse.akvorado.net")
while True:
    results = client.execute(SQL_QUERY)
    seen = {}
    new_ruleset = empty_ruleset()
    for (t, addr, proto, port, gbps, mpps, flows, sources, countries, size) in results:
        if (addr, proto, port) in seen:
            continue
        seen[(addr, proto, port)] = True

        # Flowspec
        if addr.ipv4_mapped:
            address = addr.ipv4_mapped
            rules = new_ruleset.flowspec.v4
            table = "flow4"
            mask = 32
            nh = "proto"
        else:
            address = addr
            rules = new_ruleset.flowspec.v6
            table = "flow6"
            mask = 128
            nh = "next header"
        if size[0] == size[1]:
            length = f"length = {int(size[0])}"
        else:
            length = f"length >= {int(size[0])} && <= {int(size[1])}"
        header = f"""# Time: {t}
# Source: {address}, protocol: {proto}, port: {port}
# Gbps/Mpps: {gbps:.3}/{mpps:.3}, packet size: {int(size[0])}<=X<={int(size[1])}
# Flows: {flows}, sources: {sources}, countries: {countries}
"""
        rules.append(
            f"""{header}route {table} {{
  dst {address}/{mask};
  sport = {port};
  {length};
  {nh} = {socket.getprotobyname(proto)};
}} {{
  bgp_ext_community.add((generic, 0x80060000, 0x00000000));
}};
"""
        )

        # Blackhole
        if addr.ipv4_mapped:
            rules = new_ruleset.blackhole.v4
        else:
            rules = new_ruleset.blackhole.v6
        rules.append(
            f"""{header}route {address}/{mask} blackhole {{
  bgp_community.add((65535, 666));
}};
"""
        )

    new_ruleset.flowspec.v4 = list(set(new_ruleset.flowspec.v4[:MAX_DDOS_RULES]))
    new_ruleset.flowspec.v6 = list(set(new_ruleset.flowspec.v6[:MAX_DDOS_RULES]))

    # TODO: advertise changes by mail, chat, …

    current_ruleset = new_ruleset
    changes = False
    for rules, path in (
        (current_ruleset.flowspec.v4, "v4-flowspec"),
        (current_ruleset.flowspec.v6, "v6-flowspec"),
        (current_ruleset.blackhole.v4, "v4-blackhole"),
        (current_ruleset.blackhole.v6, "v6-blackhole"),
    ):
        path = os.path.join("/etc/bird/", f"{path}.conf")
        with open(f"{path}.tmp", "w") as f:
            for r in rules:
                f.write(r)
        changes = (
            changes
            or not os.path.exists(path)
            or not filecmp.cmp(path, f"{path}.tmp", shallow=False)
        )
        os.rename(f"{path}.tmp", path)

    if not changes:
        continue

    proc = subprocess.Popen(
        ["birdc", "configure"],
        stdin=subprocess.DEVNULL,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    stdout, stderr = proc.communicate(None)
    stdout = stdout.decode("utf-8", "replace")
    stderr = stderr.decode("utf-8", "replace")
    if proc.returncode != 0:
        logger.error(
            "{} error:\n{}\n{}".format(
                "birdc reconfigure",
                "\n".join([" O: {}".format(line) for line in stdout.rstrip().split("\n")]),
                "\n".join([" E: {}".format(line) for line in stderr.rstrip().split("\n")]),
            )
        )
Until Akvorado integrates DDoS detection and mitigation, the ideas presented
in this blog post provide a solid foundation to get started with your own
anti-DDoS system.
ClickHouse can export results using Markdown format when
appending FORMAT Markdown to the query.
While most DNS clients should retry with TCP on failures, this is not
always the case: until recently, musl libc did not implement this.
The materialized view also aggregates the data at hand, both
for efficiency and to ensure we work with the right data types.
It's been a year since I started exploring HLedger, and I'm still
going. The rollover to 2023 was an opportunity to revisit my approach.
Some time ago I stumbled across Dmitry Astapov's HLedger notes (fully-fledged
hledger, which I briefly
mentioned in eventual consistency) and decided to adopt some of its ideas.
new year, new journal
First up, Astapov encourages starting a new journal file for a new calendar
year. I do this for other, accounting-adjacent files as a matter of course,
and I did it for my GNUCash files prior to adopting HLedger. But the reason
for those is a general suspicion that a simple mistake with those programs
could irrevocably corrupt my data. I'm much more confident with HLedger, so
rolling over at year's end isn't necessary for that. But there are other
advantages. A quick, obvious one is that you can get rid of old accounts (such
as expense accounts tied to a particular project, now completed).
one journal per import
In the first year, I periodically imported account data via CSV exports
of transactions and HLedger's (excellent) CSV import system. I imported
all the transactions, once each, into a single, large journal file.
Astapov instead advocates creating a separate journal for each CSV that you
wish to import, keeping the CSV around, and leaving you with a 1:1 mapping of
CSV to journal. Then use HLedger's "include" mechanism to pull them all into
the main journal.
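For illustration, the main journal then becomes little more than a list of include directives (file names hypothetical):

```
; 2023.journal
include opening-balances.journal
include import/bank-current-2023.journal
include import/credit-card-2023.journal
```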
With the former approach, where the CSV data was imported precisely once, it
was only exposed to your import rules once. The workflow ended up being:
import transactions; notice some that you could have matched with import rules
and auto-coded; write the rule for the next time. With Astapov's approach, you
can re-generate the journal from the CSV at any point in the future with an
updated set of import rules.
tracking dependencies
Now we get onto the job of driving the generation of all these derivative
journal files. Astapov has built a sophisticated system using Haskell's "Shake",
with which I'm not yet familiar, but for my sins I'm quite adept at
(GNU-flavoured) UNIX Make, so I started building with that. An example rule
captures the dependency between the journal and the underlying CSV, but also
the relevant rules file; if I modify that, and this target is run in the
future, all dependent journals should be re-generated.1
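A sketch of what such a rule could look like; the paths and the exact hledger invocation are assumptions, not the actual Makefile:

```make
# Hypothetical sketch: regenerate the journal from scratch whenever the CSV
# or the import rules change. (Recipe lines must be indented with a tab.)
2023/bank.journal: import/bank-2023.csv rules/bank.rules
	rm -f import/.latest.bank-2023.csv   # hledger's duplicate-tracking state
	rm -f $@ && touch $@
	hledger import -f $@ --rules-file rules/bank.rules import/bank-2023.csv
```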
opening balances
It's all fine and well starting over in a new year, and I might be generous
to forgive debts, but I can't count on others to do the same. We need
to carry over some balance information from one year to the next. Astapov has
a more complex (or perhaps featureful) scheme for this involving a custom
Haskell program, but I bodged something with a pair of make targets.
I think this could be golfed into a year-generic rule with a little more work.
The nice thing about this approach is the opening balances for a given year
might change, if adjustments are made in prior years. They shouldn't, for
real accounts, but very well could for more "virtual" liabilities (including
deciding to write off debts).
run lots of reports
Astapov advocates for running lots of reports, and automatically. There's a
really obvious advantage of that to me: there's no chance anyone except me
will actually interact with HLedger itself. For family finances, I need
reports to be able to discuss anything with my wife.
Extending my make rules to run reports is trivial. I've gone for HTML
reports for the most part, as they're the easiest on the eye. Unfortunately
the most useful report to discuss (at least at the moment) would be a list
of transactions in a given expense category, and the register/aregister
commands did not support HTML as an output format. I submitted my first
HLedger patch to add HTML output support to aregister:
https://github.com/simonmichael/hledger/pull/2000
addressing the virtual posting problem
I wrote in my original hledger blog post that I had to resort to
unbalanced virtual postings in order to record both a liability between
my personal cash and family, as well as categorise the spend. I still
haven't found a nice way around that.
But I suspect having broken out the journal into lots of other journals
paves the way to a better solution to the above.
The form of a solution I am thinking of is: some scheme whereby the two
destination accounts are combined together; perhaps, choose one as a primary
and encode the other information in sub-accounts under that. For example,
repeating the example from my hledger blog post:
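A hypothetical rendering of that scheme, choosing the family liability as the primary account and encoding the expense category in sub-accounts beneath it (payee, amounts, and exact account names invented):

```
2023-01-05 coffee
    family:assets:bank                         £-2.50
    family:liabilities:jon:expenses:coffee      £2.50
```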
(I note this is very similar to a solution proposed to me by someone
responding on twitter).
The next step is to recognise that sometimes when looking at the data I
care about one aspect, and at other times the other, but rarely both. So
for the case where I'm thinking about family finances, I could use
account aliases
to effectively flatten out the expense category portion and ignore it.
On the other hand, when I'm concerned about how I've spent my personal
cash and not about how much I owe the family account, I could use
aliases to do the opposite: rewrite away the family:liabilities:jon
prefix and combine the transactions with the regular jon:expenses
account hierarchy.
(this is all speculative: I need to actually try this.)
catching errors after an import
When I import the transactions for a given real bank account, I check the
final balance against another source: usually a bank statement, to make
sure they agree. I wasn't using any of the myriad methods to make sure
that this remains true later on, and so there was the risk that I make an
edit to something and accidentally remove a transaction that contributed
to that number, and not notice (until the next import).
The CSV data my bank gives me for accounts (not for credit cards) also includes
a 'resulting balance' field. It was therefore trivial to extend the CSV import
rules to add balance
assertions to
the transactions that are generated. This catches the problem.
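Concretely, an imported transaction carrying a balance assertion might look like this (amounts and account names invented); the `= £1234.56` asserts the account balance after the posting, and hledger errors out if a later edit breaks it:

```
2023-01-06 ACME SUPERMARKET
    expenses:food                £12.34
    assets:bank:current         £-12.34 = £1234.56
```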
There are a couple of warts with balance assertions on every such
transaction: for example, dealing with the duplicate transaction for paying
a credit card: one from the bank statement, one from the credit card.
Removing one of the two is sufficient to correct the account balances but
sometimes they don't agree on the transaction date, or the transactions
within a given day are sorted slightly differently by HLedger than by the
bank. The simple solution is to just manually delete one or two assertions:
there remain plenty more for assurance.
going forward
I've only scratched the surface of the suggestions in Astapov's "full fledged
HLedger" notes. I'm up to step 2 of 14. I'm expecting to return to it once
the changes I've made have bedded in a little bit.
I suppose I could anonymize and share the framework (Makefile etc) that I am
using if anyone was interested. It would take some work, though, so I don't know
when I'd get around to it.
the rm latest bit is to clear up some state-tracking files that HLedger writes to avoid importing duplicate transactions. In this case, I know better than HLedger.
Protocol Buffers are a popular choice for serializing structured data
due to their compact size, fast processing speed, language independence, and
compatibility. Other alternatives exist, including Cap'n Proto,
CBOR, and Avro.
Usually, data structures are described in a proto definition file
(.proto). The protoc compiler and a language-specific plugin convert it into
code:
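For instance, a minimal definition might look like this (the field names are illustrative, not Akvorado's actual schema):

```proto
syntax = "proto3";

message FlowMessage {
  uint64 time_received = 1;
  uint32 sampling_rate = 2;
  bytes  src_addr      = 3;
  bytes  dst_addr      = 4;
}
```

Running protoc with the Go plugin (e.g. `protoc --go_out=. flow.proto`) then generates a Go structure along with its marshaling code.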
Akvorado collects network flows using IPFIX or sFlow, decodes them
with GoFlow2, encodes them to Protocol Buffers, and sends them to
Kafka to be stored in a ClickHouse database. Collecting a new field,
such as source and destination MAC addresses, requires modifications in multiple
places, including the proto definition file and the ClickHouse migration code.
Moreover, the cost is paid by all users.1 It would be nice to have an
application-wide schema and let users enable or disable the fields they
need.
While the main goal is flexibility, we do not want to sacrifice performance. On
this front, this is quite a success: when upgrading from 1.6.4 to 1.7.1, the
decoding and encoding performance almost doubled!
Faster Protocol Buffers encoding
I use the following code to benchmark both the decoding and
encoding process. Initially, the Decode() method is a thin layer above the
GoFlow2 producer and stores the decoded data into the in-memory structure
generated by protoc. Later, some of the data will be encoded directly during
flow decoding. This is why we measure both the decoding and the
encoding.2
The canonical Go implementation for Protocol Buffers,
google.golang.org/protobuf is not the most
efficient one. For a long time, people were relying on gogoprotobuf.
However, the project is now deprecated. A good replacement is
vtprotobuf.3
Dynamic Protocol Buffers encoding
We have our baseline. Let's see how to encode our Protocol Buffers without a
.proto file. The wire format is simple and relies heavily on variable-width
integers.
Variable-width integers, or varints, are an efficient way of encoding unsigned
integers using a variable number of bytes, from one to ten, with small values
using fewer bytes. They work by splitting integers into 7-bit payloads and using
the 8th bit as a continuation indicator, set to 1 for all payloads
except the last.
For our usage, we only need two types: variable-width
integers and byte sequences. A byte sequence is encoded by prefixing it by its
length as a varint. When a message is encoded, each key-value pair is turned
into a record consisting of a field number, a wire type, and a payload. The
field number and the wire type are encoded as a single variable-width integer
called a tag.
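To make the wire format concrete, here is a toy Python encoder for varints, tags, and length-prefixed byte sequences; this is an illustration of the format, not Akvorado's Go implementation:

```python
def encode_varint(value: int) -> bytes:
    """Encode an unsigned integer as a varint: 7-bit payloads,
    with the 8th bit set on every byte except the last."""
    out = bytearray()
    while True:
        payload = value & 0x7F
        value >>= 7
        if value:
            out.append(payload | 0x80)  # continuation bit
        else:
            out.append(payload)
            return bytes(out)


def encode_tag(field_number: int, wire_type: int) -> bytes:
    """A tag is the field number shifted left by 3 bits, OR'ed with the wire type."""
    return encode_varint((field_number << 3) | wire_type)


def encode_bytes_field(field_number: int, data: bytes) -> bytes:
    """Wire type 2 (length-delimited): tag, then length as a varint, then the bytes."""
    return encode_tag(field_number, 2) + encode_varint(len(data)) + data
```

For example, encode_varint(300) yields the bytes 0xAC 0x02: 300 is 0b10_0101100, split into the payloads 0101100 (with continuation bit) and 10.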
We use a few low-level functions around the protowire package to build the output buffer.
Our schema abstraction contains the appropriate information to encode a message
(ProtobufIndex) and to generate a proto definition file (fields starting with
Protobuf):
type Column struct {
	Key      ColumnKey
	Name     string
	Disabled bool
	// […]
	// For protobuf.
	ProtobufIndex    protowire.Number
	ProtobufType     protoreflect.Kind // Uint64Kind, Uint32Kind, …
	ProtobufEnum     map[int]string
	ProtobufEnumName string
	ProtobufRepeated bool
}
We have a few helper methods around the protowire functions to directly
encode the fields while decoding the flows. They skip disabled fields or
non-repeated fields already encoded, as in the sFlow decoder.
Fields that are required later in the pipeline, like source and destination
addresses, are stored unencoded in a separate structure:
type FlowMessage struct {
	TimeReceived uint64
	SamplingRate uint32

	// For exporter classifier
	ExporterAddress netip.Addr

	// For interface classifier
	InIf  uint32
	OutIf uint32

	// For geolocation or BMP
	SrcAddr netip.Addr
	DstAddr netip.Addr
	NextHop netip.Addr

	// Core component may override them
	SrcAS     uint32
	DstAS     uint32
	GotASPath bool

	// protobuf is the protobuf representation for the information not contained above.
	protobuf    []byte
	protobufSet bitset.BitSet
}
The protobuf slice holds encoded data. It is initialized with a capacity of
500 bytes to avoid resizing during encoding. There is also some reserved room at
the beginning to be able to encode the total size as a variable-width integer.
Upon finalizing encoding, the remaining fields are added and the message length
is prefixed:
func (schema *Schema) ProtobufMarshal(bf *FlowMessage) []byte {
	schema.ProtobufAppendVarint(bf, ColumnTimeReceived, bf.TimeReceived)
	schema.ProtobufAppendVarint(bf, ColumnSamplingRate, uint64(bf.SamplingRate))
	schema.ProtobufAppendIP(bf, ColumnExporterAddress, bf.ExporterAddress)
	schema.ProtobufAppendVarint(bf, ColumnSrcAS, uint64(bf.SrcAS))
	schema.ProtobufAppendVarint(bf, ColumnDstAS, uint64(bf.DstAS))
	schema.ProtobufAppendIP(bf, ColumnSrcAddr, bf.SrcAddr)
	schema.ProtobufAppendIP(bf, ColumnDstAddr, bf.DstAddr)

	// Add length and move it as a prefix
	end := len(bf.protobuf)
	payloadLen := end - maxSizeVarint
	bf.protobuf = protowire.AppendVarint(bf.protobuf, uint64(payloadLen))
	sizeLen := len(bf.protobuf) - end
	result := bf.protobuf[maxSizeVarint-sizeLen : end]
	copy(result, bf.protobuf[end:end+sizeLen])

	return result
}
Minimizing allocations is critical for maintaining encoding performance. The
benchmark tests should be run with the -benchmem flag to monitor allocation
numbers. Each allocation incurs an additional cost to the garbage collector. The
Go profiler is a valuable tool for identifying areas of code that can be
optimized:
$ go test -run=__nothing__ -bench=Netflow/with_encoding \
>         -benchmem -cpuprofile profile.out \
>         akvorado/inlet/flow
goos: linux
goarch: amd64
pkg: akvorado/inlet/flow
cpu: AMD Ryzen 5 5600X 6-Core Processor
Netflow/with_encoding-12      143953      7955 ns/op    8256 B/op    134 allocs/op
PASS
ok      akvorado/inlet/flow     1.418s
$ go tool pprof profile.out
File: flow.test
Type: cpu
Time: Feb 4, 2023 at 8:12pm (CET)
Duration: 1.41s, Total samples = 2.08s (147.96%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) web
After using the internal schema instead of code generated from the
proto definition file, the performance improved. However, this comparison is not
entirely fair as less information is being decoded and previously GoFlow2 was
decoding to its own structure, which was then copied to our own version.
As for testing, we use github.com/jhump/protoreflect: the
protoparse package parses the proto definition file we generate and the
dynamic package decodes the messages. Check the ProtobufDecode()
method for more details.4
To get the final figures, I have also optimized the decoding in GoFlow2. It
was relying heavily on binary.Read(). This function may use
reflection in certain cases and each call allocates a byte array to read data.
Replacing it with a more efficient version provides the following
improvement:
It is now easier to collect new data and the inlet component is faster!
Notice
Some paragraphs were editorialized by ChatGPT, using
"editorialize and keep it short" as a prompt. The result was proofread by a
human for correctness. The main idea is that ChatGPT should be better at
English than me.
While empty fields are not serialized to Protocol Buffers, empty
columns in ClickHouse take some space, even if they compress well.
Moreover, unused fields are still decoded and they may clutter the
interface.
There is a similar function using NetFlow. NetFlow and IPFIX
protocols are less complex to decode than sFlow as they are using a simpler
TLV structure.
vtprotobuf generates more optimized Go code by removing an
abstraction layer: it directly generates the code encoding each field to
bytes.
This is another Amazon collection of short fiction, this time mostly at
novelette length. (The longer ones might creep into novella.) As before,
each one is available separately for purchase or Amazon Prime "borrowing,"
with separate ISBNs. The sidebar cover is for the first in the sequence.
(At some point I need to update my page templates so that I can add
multiple covers.)
N.K. Jemisin's "Emergency Skin" won the 2020 Hugo Award for Best
Novelette, so I wanted to read and review it, but it would be too short
for a standalone review. I therefore decided to read the whole collection
and review it as an anthology.
This was a mistake. Learn from my mistake.
The overall theme of the collection is technological advance, rapid
change, and the ethical and social question of whether we should slow
technology because of social risk. Some of the stories stick to that
theme more closely than others. Jemisin's story mostly ignores it, which
was probably the right decision.
"Ark" by Veronica Roth: A planet-killing asteroid has been on
its inexorable way towards Earth for decades. Most of the planet has been
evacuated. A small group has stayed behind, cataloging samples and
filling two remaining ships with as much biodiversity as they can find
with the intent to leave at the last minute. Against that backdrop, two
of that team bond over orchids.
If you were going "wait, what?" about the successful evacuation of Earth,
yeah, me too. No hint is offered as to how this was accomplished. Also,
the entirety of humanity abandoned mutual hostility and national borders
to cooperate in the face of the incoming disaster, which is, uh, bizarrely
optimistic for an otherwise gloomy story.
I should be careful about how negative I am about this story because I am
sure it will be someone's favorite. I can even write part of the positive
review: an elegiac look at loss, choices, and the meaning of a life, a
moving look at how people cope with despair. The writing is fine, the
story structure works; it's not a bad story. I just found it monumentally
depressing, and was not engrossed by the emotionally abused protagonist's
unresolved father issues. I can imagine a story around the same facts and
plot that I would have liked much better, but all of these people need
therapy and better coping mechanisms.
I'm also not sure what this had to do with the theme, given that the
incoming asteroid is random chance and has nothing to do with
technological development. (4)
"Summer Frost" by Blake Crouch: The best part of this story is
the introductory sequence before the reader knows what's going on, which
is full of evocative descriptions. I'm about to spoil what is going on,
so if you want to enjoy that untainted by the stupidity of the rest of the
plot, skip the rest of this story review.
We're going to have a glut of stories about the weird and obsessive form
of AI risk invented by the fevered imaginations of the "rationalist"
community, aren't we. I don't know why I didn't predict that. It's going
to be just as annoying as the glut of cyberpunk novels written by people
who don't understand computers.
Crouch lost me as soon as the setup is revealed. Even if I believed that
a game company would use a deep learning AI still in learning mode
to run an NPC (I don't; see
Microsoft's Tay for an obvious reason why not), or that such an NPC
would spontaneously start testing the boundaries of the game world (this
is not how deep learning works), Crouch asks the reader to believe that
this AI started as a fully scripted NPC in the prologue with a
fixed storyline. In other words, the foundation of the story is that this
game company used an AI model capable of becoming a general intelligence
for barely more than a cut scene.
This is not how anything works.
The rest of the story is yet another variation on a science fiction plot
so old and threadbare that Isaac Asimov invented the Three Laws of
Robotics to avoid telling more versions of it. Crouch's contribution is
to dress it up in the terminology of the excessively online. (The middle
of the story features a detailed discussion of
Roko's basilisk;
if you recognize that, you know what you're in for.) Asimov would not
have had a lesbian protagonist, so points for progress I guess, but the AI
becomes more interesting to the protagonist than her wife and kid because
of course it does. There are a few twists and turns along the way, but
the destination is the bog-standard hard-takeoff general intelligence
scenario.
One more pet peeve: Authors, stop trying to illustrate the growth of your
AI by having it move from broken to fluent English. English grammar is so
much easier than self-awareness or the Turing test that we had programs
that could critique your grammar decades before we had believable
chatbots. It's going to get grammar right long before the content of the
words makes any sense. Also, your AI doesn't sound dumber, your AI sounds
like someone whose native language doesn't use pronouns and helper verbs
the way that English does, and your decision to use that as a marker for
intelligence is, uh, maybe something you should think about. (3)
"Emergency Skin" by N.K. Jemisin: The protagonist is a
heavily-augmented cyborg from a colony of Earth's diaspora. The founders
of that colony fled Earth when it became obvious to them that the planet
was dying. They have survived in another star system, but they need a
specific piece of technology from the dead remnants of Earth. The
protagonist has been sent to retrieve it.
The twist is that this story is told in the second-person perspective by
the protagonist's ride-along AI, created from a consensus model of the
rulers of the colony. We never see directly what the protagonist is doing
or thinking, only the AI reaction to it. This is exactly the sort of
gimmick that works much better in short fiction than at novel length.
Jemisin uses it to create tension between the reader and the narrator, and
I thoroughly enjoyed the effect. (As shown in the
Broken Earth trilogy, Jemisin is one of the few
writers who can use second-person effectively.)
I won't spoil the revelation, but it's barbed and biting and vicious and I
loved it. Jemisin does deliver the point with a sledgehammer, so be aware
of that if you want subtlety in your short fiction, but I prefer the
bluntness. (This is part of why I usually don't get along with literary
short stories.) The reader of course can't change the direction of the
story, but the second-person perspective still provides a hit of vicarious
satisfaction. I can see why this won the Hugo; it's worth seeking out.
(8)
"You Have Arrived at Your Destination" by Amor Towles: Sam and
his wife are having a child, and they've decided to provide him with an
early boost in life. Vitek is a fertility lab, but more than that, it can
do some gene tweaking and adjustment to push a child more towards one
personality or another. Sam and his wife have spent hours filling out
profiles, and his wife spent hours weeding possible choices down to three.
Now, Sam has come to Vitek to pick from the remaining options.
Speaking of literary short stories, Towles is the non-SFF writer of this
bunch, and it's immediately obvious. The story requires the SFnal
premise, but after that this is a character piece. Vitek is an elite,
expensive company with a condescending and overly-reductive attitude
towards humanity, which is entirely intentional on the author's part.
This is the sort of story that gets resolved in an unexpected conversation
in a roadside bar, and where most of the conflict happens inside the
protagonist's head.
I was initially going to complain that Towles does the standard literary
thing of leaving off the denouement on the grounds that the reader can
figure it out, but when I did a bit of re-reading for this review, I found
more of the bones than I had noticed the first time. There's enough
subtlety that I had to think for a bit and re-read a passage, but not too
much. It's also the most thoughtful treatment of the theme of the
collection, the only one that I thought truly wrestled with the weird
interactions between technological capability and human foresight. Next
to "Emergency Skin," this was the best story of the collection. (7)
"The Last Conversation" by Paul Tremblay: A man wakes up in a
dark room, in considerable pain, not remembering anything about his life.
His only contact with the world at first is a voice: a woman who is
helping him recover his strength and his memory. The numbers that head
the chapters have significant gaps, representing days left out of the
story, as he pieces together what has happened alongside the reader.
Tremblay is the horror writer of the collection, so predictably this is
the story whose craft I can admire without really liking it. In this
case, the horror comes mostly from the pacing of revelation, created by
the choice of point of view. (This would be a much different story from
the perspective of the woman.) It's well-done, but it has the tendency
I've noticed in other horror stories of being a tightly closed system. I
see where the connection to the theme is, but it's entirely in the
setting, not in the shape of the story.
Not my thing, but I can see why it might be someone else's. (5)
"Randomize" by Andy Weir: Gah, this was so bad.
First, and somewhat expectedly, it's a clunky throwback to a 1950s-style
hard SF puzzle story. The writing is atrocious: wooden, awkward, cliched,
and full of gratuitous infodumping. The characterization is almost
entirely through broad stereotypes; the lone exception is the female
character, who at least adds an interesting twist despite being forced to
act like an idiot because of the plot. It's a very old-school type of
single-twist story, but the ending is completely implausible and falls
apart if you breathe on it too hard.
Weir is something of a throwback to an earlier era of scientific puzzle
stories, though, so maybe one is inclined to give him a break on the
writing quality. (I am not; one of the ways in which science fiction has
improved is that you can get good scientific puzzles and good
writing these days.) But the science is also so bad that I was literally
facepalming while reading it.
The premise of this story is that quantum computers are commercially
available. That will cause a serious problem for Las Vegas casinos,
because the generator for keno
numbers is vulnerable to quantum algorithms. The solution proposed by the
IT person for the casino? A quantum random number generator. (The words
"fight quantum with quantum" appear literally in the text if you're
wondering how bad the writing is.)
You could convince me that an ancient keno system is using a pseudorandom
number generator that might be vulnerable to some quantum algorithm and
doesn't get reseeded often enough. Fine. And yes, quantum computers can
be used to generate high-quality sources of random numbers. But this
solution to the problem makes no sense whatsoever. It's like swatting a
house fly with a nuclear weapon.
Weir says explicitly in the story that all the keno system needs is an
external source of high-quality random numbers. The next step is to go to
Amazon and buy a hardware random number generator. If you want to
splurge, it might cost you $250. Problem solved. Yes, hardware random
number generators have various limitations that may cause you problems if
you need millions of bits or you need them very quickly, but not for
something as dead-simple and with such low entropy requirements as keno
numbers! You need a trivial number of bits for each round; even the
slowest and most conservative hardware random number generator would be
fine. Hell, measure the noise levels on the casino floor.
Point a camera at a lava
lamp. Or just buy one of the physical ball machines they use for the
lottery. This problem is heavily researched, by casinos in
particular, and is not significantly changed by the availability of
quantum computers, at least for applications such as keno where the
generator can be reseeded before each generation.
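The entropy claim is easy to check with a quick back-of-the-envelope calculation in Python (mine, not the story's):

```python
import math
import secrets

# A keno draw picks 20 distinct numbers from 1..80.
rng = secrets.SystemRandom()  # backed by the OS entropy source
draw = sorted(rng.sample(range(1, 81), 20))

# Entropy needed per draw: log2 of the number of possible
# draws, which is C(80, 20).
bits_per_draw = math.log2(math.comb(80, 20))
print(f"{bits_per_draw:.0f} bits per draw")  # about 62 bits
```

Even a slow hardware generator producing a few kilobits per second covers dozens of keno draws per second.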
You could maybe argue that this is an excuse for the IT guy to get his
hands on a quantum computer, which fits the stereotypes, but that still
breaks the story for reasons that would be spoilers. As soon as any other
casino thought about this, they'd laugh in the face of the characters.
I don't want to make too much of this, since anyone can write one bad
story, but this story was dire at every level. I still owe Weir a proper
chance at novel length, but I can't say this added to my enthusiasm. (2)
Rating: 4 out of 10
The Truth is the 25th Discworld novel. Some reading order guides
group it loosely into an "industrial revolution" sequence following
Moving Pictures, but while there are
thematic similarities I'll talk about in a moment, there's no real plot
continuity. You could arguably start reading Discworld here, although
you'd be spoiled for some character developments in the early Watch
novels.
William de Worde is paid to write a newsletter. That's not precisely what
he calls it, and it's not clear whether his patrons know that he publishes
it that way. He's paid to report on news of Ankh-Morpork that may be of
interest to various rich or influential people who are not in
Ankh-Morpork, and he discovered the best way to optimize this was to write
a template of the newsletter, bring it to an engraver to make a plate of
it, and run off copies for each of his customers, with some minor
hand-written customization. It's a comfortable living for the estranged
younger son of a wealthy noble. As the story opens, William is dutifully
recording the rumor that dwarfs have discovered how to turn lead into
gold.
The rumor is true, although not in the way that one might initially
assume.
The world is made up of four elements: Earth, Air, Fire, and Water.
This is a fact well known even to Corporal Nobbs. It's also wrong.
There's a fifth element, and generally it's called Surprise.
For example, the dwarfs found out how to turn lead into gold by doing
it the hard way. The difference between that and the easy way is that
the hard way works.
The dwarfs used the lead to make a movable type printing press, which is
about to turn William de Worde's small-scale, hand-crafted newsletter into
a newspaper.
The movable type printing press is not unknown technology. It's banned
technology, because the powers that be in Ankh-Morpork know enough to be
deeply suspicious of it. The religious establishment doesn't like it
because words are too important and powerful to automate. The nobles and
the Watch don't like it because cheap words cause problems. And the
engraver's guild doesn't like it for obvious reasons. However, Lord
Vetinari knows that one cannot apply brakes to a volcano, and commerce
with the dwarfs is very important to the city. The dwarfs can continue.
At least for now.
As in Moving Pictures, most of The Truth is an idiosyncratic
speedrun of the social effects of a new technology, this time newspapers.
William has no grand plan; he's just an observant man who likes to write,
cares a lot about the truth, and accidentally stumbles into editing a
newspaper. (This, plus being an estranged son of a rich family, feels
very on-point for journalism.) His naive belief is that people want to
read true things, since that's what his original patrons wanted. Truth,
however, may not be in the top five things people want from a newspaper.
This setup requires some narrative force to push it along, which is
provided by a plot to depose Vetinari by framing him for murder. The most
interesting part of that story is Mr. Pin and Mr. Tulip, the people hired
to do the framing and then dispose of the evidence. They're a classic
villain type: the brains and the brawn, dangerous, terrifying, and willing
to do horrible things to people. But one thing Pratchett excels at is
taking a standard character type, turning it a bit sideways, and stuffing
in things that one wouldn't think would belong. In this case, that's
Mr. Tulip's deep appreciation for, and genius grasp of, fine art. It
should not work to have the looming, awful person with anger issues be
able to identify the exact heritage of every sculpture and fine piece of
goldsmithing, and yet somehow it does.
Also as in Moving Pictures (and, in a different way,
Soul Music), Pratchett tends to
anthropomorphize technology, giving it a life and motivations of its own.
In this case, that's William's growing perception of the press as an
insatiable maw into which one has to feed words. I'm usually dubious of
shifting agency from humans to things when doing social analysis (and
there's a lot of social analysis here), but I have to concede that
Pratchett captures something deeply true about the experience of feedback
loops with an audience. A lot of what Pratchett puts into this book about
the problematic relationship between a popular press and the truth is
obvious and familiar, but he also makes some subtle points about the way
the medium shapes what people expect from it and how people produce
content for it that are worthy of
Marshall McLuhan.
The interactions between William and the Watch were less satisfying. In
our world, the US press is, with only rare exceptions, a thoughtless PR
organ for police propaganda and the
exonerative tense. Pratchett tackles that here... sort of. William
vaguely grasps that his job as a reporter may be contrary to the job of
the Watch to maintain order, and Vimes's ambivalent feelings towards
"solving crimes" push the story in that direction. But this is also
Vimes, who is clearly established as one of the good sort and therefore is
a bad vehicle for talking about how the police corrupt the press.
Pratchett has Vimes and Vetinari tacitly encourage William, which works
within the story but takes the pressure off the conflict and leaves
William well short of understanding the underlying politics. There's a
lot more that could be said about the tension between the press and the
authorities, but I think the Discworld setup isn't suitable for it.
This is the sort of book that benefits from twenty-four volumes of
backstory and practice. Pratchett's Ankh-Morpork cast ticks along like a
well-oiled machine, which frees up space that would otherwise have to be
spent on establishing secondary characters. The result is a lot of plot
and social analysis shoved into a standard-length Discworld novel, and a
story that's hard to put down. The balance between humor and plot is just
about perfect, the references and allusions aren't overwhelming, and the
supporting characters, both new and old, are excellent. We even get a
good Death sequence. This is solid, consistent stuff: Discworld as a
mature, well-developed setting with plenty of stories left to tell.
Followed by Thief of Time in publication order, and later by
Monstrous Regiment in the vaguely-connected industrial revolution
sequence.
Rating: 8 out of 10
In Mexico, we have the great luck to live among vestiges of long-gone
cultures, some that were conquered and in some way adapted and survived
into our modern, mostly-West-European-derived society, and some that
thrived but disappeared many more centuries ago. And although not
everybody feels the same way, in my family we have always enjoyed
visiting archaeological sites, both when I was a child and today.
Some of the regulars that follow this blog (or its syndicators) will
remember Xochicalco, as it was the destination we chose for the
daytrip back in the day, in DebConf6 (May 2006).
This weekend, my mother suggested we go there: being winter, the
weather is quite pleasant (we were at about 25°C, and by the hottest
months of the year it can easily reach 10 more). The place lacks shade,
like most archaeological sites, and it does get quite tiring
nevertheless!
Xochicalco is quite unique among our archaeological sites, as it was
built as a conference city: people came from cultures spanning all of
Mesoamerica to debate and homogenize the calendars used in the region.
The first photo I shared here is by the Quetzalcóatl temple, where each
of the four sides shows people from different cultures (depicted in the
styles of their local self-representations), encodes equivalent dates
in the different calendrical systems, and sets them alongside
representations of the god of knowledge, the feathered serpent,
Quetzalcóatl.
It was a very nice day out. And, of course, it brought back memories
of my favorite conference: visiting the site of a very important
conference.
Tess of the Road is the first book of a YA fantasy duology set in
the same universe as Seraphina and
Shadow Scale.
It's hard to decide what to say about reading order (and I now appreciate
the ambiguous answers I got). Tess of the Road is a sequel to
Seraphina and Shadow Scale in the sense that there are
numerous references to the previous duology, but it has a different
protagonist and different concerns. You don't need to read the other
duology first, but Tess of the Road will significantly spoil the
resolution of the romance plot in Seraphina, and it will be obvious
that you've skipped over background material. That said, Shadow
Scale is not a very good book, and this is a much better book.
I guess the summary is this: if you're going to read the first duology,
read it first, but don't feel obligated to do so.
Tess was always a curious, adventurous, and some would say unmanageable
girl, nothing like her twin. Jeanne is modest, obedient, virtuous, and
practically perfect in every way. Tess is not; after a teenage love
affair resulting in an out-of-wedlock child and a boy who disappeared
rather than marry her, their mother sees no alternative but to lie about
which of the twins is older. If Jeanne can get a good match among the
nobility, the family finances may be salvaged. Tess's only remaining use
is to help her sister find a match, and then she can be shuffled off to a
convent.
Tess throws herself into court politics and does exactly what she's
supposed to. She engineers not only a match, but someone Jeanne sincerely
likes. Tess has never lacked competence. But this changes nothing about
her mother's view of her, and Tess is depressed, worn, and desperately
miserable in Jeanne's moment of triumph. Jeanne wants Tess to stay and
become the governess of her eventual children, retaining their twin
bond, the two of them against the world. Their older sister Seraphina, more
perceptively, tries to help her join an explorer's expedition. Tess, in a
drunken spiral of misery, insults everyone and runs away, with only a new
pair of boots and a pack of food.
This is going to be one of those reviews where the things I didn't like
are exactly the things other readers liked. I see why people loved this
book, and I wish I had loved it too. Instead, I liked parts of it a great
deal and found other parts frustrating or a bit too off-putting. Mostly
this is a preference problem rather than a book problem.
My most objective complaint is the pacing, which was also my primary
complaint about Shadow Scale. It was not hard to see where Hartman
was going with the story, I like that story, I was on board with going
there, but getting there took for-EV-er. This is a 536 page book that I
would have edited to less than 300 pages. It takes nearly a hundred pages
to get Tess on the road, and while some of that setup is necessary, I did
not want to wallow in Tess's misery and appalling home life for quite that
long.
A closely related problem is that Hartman continues to love flashbacks.
Even after Tess has made her escape, we get the entire history of her
awful teenage years slowly dribbled out over most of the book. Sometimes
this is revelatory; mostly it's depressing. I had guessed the outlines of
what had happened early in the book (it's not hard), and that was more
than enough motivation for me, but Hartman was determined to make the
reader watch every crisis and awful moment in detail. This is exactly
what some readers want, and sometimes it's even what I want, but not here.
I found the middle of the book, where the story is mostly flashbacks and
flailing, to be an emotional slog.
Part of the problem is that Tess has an abusive mother and goes through
the standard abuse victim process of being sure that she's the one who's
wrong and that her mother is justified in her criticism. This is
certainly realistic, and it eventually leads to some satisfying catharsis
as Tess lets go of her negative self-image. But Tess's mother is a
narcissistic religious fanatic with a persecution complex and not a single
redeeming quality whatsoever, and I loathed reading about her, let
alone reading Tess tiptoeing around making excuses for her. The point of
this in the story is for Tess to rebuild her self-image, and I get it, and
I'm sure this will work for some readers, but I wanted Tess's mother (and
the rest of her family except her sisters) to be eaten by dragons. I do
not like the emotional experience of hating a character in a book this
much.
Where Tess of the Road is on firmer ground is when Tess has an
opportunity to show her best qualities, such as befriending a quigutl in
childhood and, in the sort of impulsive decision that shows her at her
best, learning their language. (For those who haven't read the previous
books, quigutls are a dog-sized subspecies of dragon that everyone usually
treats like intelligent animals, although they're clearly more than that.)
Her childhood quigutl friend Pathka becomes her companion on the road,
which both gives her wanderings some direction and adds some useful
character interaction.
Pathka comes with a plot that is another one of those elements that I
think will work for some readers but didn't work for me. He's in search
of a Great Serpent, a part of quigutl mythology that neither humans nor
dragons pay attention to. That becomes the major plot of the novel apart
from Tess's emotional growth. Pathka also has a fraught relationship with
his own family, which I think was supposed to parallel Tess's
relationships but never clicked for me. I liked Tess's side of this
relationship, but Pathka is weirdly incomprehensible and apparently fickle
in ways that I found unsatisfying. I think Hartman was going for an alien
tone that didn't quite work for me.
This is a book that gets considerably better as it goes along, and the
last third of the book was great. I didn't like being dragged through the
setup, but I loved the character Tess became. Once she reaches the road
crew, this was a book full of things that I love reading about. The
contrast between her at the start of the book and the end is satisfying
and rewarding. Tess's relationship with her twin Jeanne deserves special
mention; their interaction late in the book is note-perfect and much
better than I had expected.
Unfortunately, Tess of the Road doesn't have a real resolution.
It's only the first half of Tess's story, which comes back to that pacing
problem. Ah well.
I enjoyed this but I didn't love it. The destination was mostly worth the
journey, but I thought the journey was much too long and I had to spend
too much time in the company of people I hated far more intensely than was
comfortable. I also thought the middle of the book sagged, a problem I
have now had with two out of three of Hartman's books. But I can see why
other readers with slightly different preferences loved it. I'm probably
invested enough to read the sequel, although I'm a bit grumbly that the
sequel is necessary.
Followed by In the Serpent's Wake.
Rating: 7 out of 10
Artifact Space is a military (mostly) science fiction novel, the
first of an expected trilogy. Christian Cameron is a prolific author of
historical fiction under that name, thrillers under the name Gordon Kent,
and historical fantasy under the name Miles Cameron. This is his first
science fiction novel.
Marca Nbaro is descended from one of the great spacefaring mercantile
families, but it's not doing her much good. She is a ward of the
Orphanage, the boarding school for orphaned children of the DHC, generous
in theory and a hellhole in practice. Her dream to serve on one of the
Greatships, the enormous interstellar vessels that form the backbone of
the human trading network, has been blocked by the school authorities, a
consequence of the low-grade war she's been fighting with them throughout
her teenage years. But Marca is not a person to take no for an answer.
Pawning her family crest gets her just enough money to hire a hacker to
doctor her school records, adding the graduation she was denied and
getting her aboard the Greatship Athens as a new Midshipper.
I don't read a lot of military science fiction, but there is one type of
story that I love that military SF is uniquely well-suited to tell. It's
not the combat or the tactics or the often-trite politics. It's the
experience of the military as a system, a collective human endeavor.
One ideal of the military is that people come to it from all sorts of
backgrounds, races, and social classes, and the military incorporates them
all into a system built for a purpose. It doesn't matter who you are or
what you did before: if you follow the rules, do your job, and become part
of a collaboration larger than yourself, you have a place and people to
watch your back whether or not they know you or like you. Obviously, like
any ideal, many militaries don't live up to this, and there are many
stories about those failures. But the story of that ideal, told well, is
a genre I like a great deal and is hard to find elsewhere.
This sort of military story shares some features with found family, and
it's not a coincidence that I also like found family stories. But found
family still assumes that these people love you, or at least like you.
For some protagonists, that's a tricky barrier both to cross and to
believe one has crossed. The (admittedly idealized) military doesn't
assume anyone likes you. It doesn't expect that you or anyone around you
have the right feelings. It just expects you to do your job and work with
other people who are doing their job. The requirements are more concrete,
and thus in a way easier to believe in.
Artifact Space is one of those military science fiction stories. I
was entirely unsurprised to see that the author is a former US Navy career
officer.
The Greatships here are, technically, more of a merchant marine than a
full-blown military. (The author noted in an interview that he based
them on the merchant ships of Venice.) The weapons are used primarily for
defense; the purpose of the Greatships is trade, and every crew member has
a storage allotment in the immense cargo area that they're encouraged to
use. The setting is in the far future, after a partial collapse and
reconstruction of human society, in which humans have spread through
interstellar space, settled habitable planets, and built immense orbital
cities. The Athens is trading between multiple human settlements,
but its true destination is far into the deep black: Tradepoint, where it
can trade with the mysterious alien Starfish for xenoglas, a material that
humans have tried and failed to reproduce and on which much of human
construction now depends.
This is, fair warning, one of those stories where the scrappy underdog of noble
birth makes friends with everyone and is far more competent than anyone
expects. The story shape is not going to surprise you, and you have to
have considerable tolerance for it to enjoy this book. Marca is
ridiculously, absurdly central to the plot for a new Middie. Sometimes
this makes sense given her history; other times, she is in the middle of
improbable accidents that felt forced by the author. Cameron doesn't
entirely break normal career progression, but Marca is very special in a
way that you only get to be as the protagonist of a novel.
That said, Cameron does some things with that story shape that I liked.
Marca's hard-won survival skills are not weirdly well-suited for
her new life aboard ship. To the contrary, she has to unlearn a lot of
bad habits and let go of a lot of anxiety. I particularly liked her
relationship with her more-privileged cabin mate, which at first seemed to
only be a contrast between Thea's privilege and Marca's background, but
turned into both of them learning from each other. There's a great mix of
supporting characters, with a wide variety of interactions with Marca and
a solid sense that all of the characters have their own lives and their
own concerns that don't revolve around her.
There is, of course, a plot to go with this. I haven't talked about it
much because I think the summaries of this book are a bit of a spoiler,
but there are several layers of political intrigue, threats to the ship,
an interesting AI, and a good hook in the alien xenoglas trade. Cameron
does a deft job balancing the plot with Marca's training and her
slow-developing sense of place in the ship (and fear about discovery of
her background and hacking). The pacing is excellent, showing all the
skill I'd expect from someone with a thriller background and over forty
prior novels under his belt. Cameron portrays the tedious work of
learning a role on a ship without boring the reader, which is a tricky
balancing act.
I also like the setting: a richly multicultural future that felt like it
included people from all of Earth, not just the white western parts. That
includes a normalized androgyne third gender, which is the sort of thing
you rarely see in military SF. Faster-than-light travel involves typical
physics hand-waving, but the shape of the hand-waving is one I've not seen
before and is a great excuse for copying the well-known property of
oceangoing navies that longer ships can go faster.
(One tech grumble, though: while Cameron does eventually say that this is
a known tactic and Marca didn't come up with anything novel, deploying
spread sensors for greater resolution is sufficiently obvious it should be
standard procedure, and shouldn't have warranted the character reactions
it got.)
I thoroughly enjoyed this. Artifact Space is the best military SF
that I've read in quite a while, at least back to John G. Hemry's
JAG in space novels and probably better than
those. It's going to strike some readers, with justification, as cliched,
but the cliches are handled so well that I had only minor grumbling at a
few absurd coincidences. Marca is a great character who is easy to care
about. The plot was tense and satisfying, and the feeling of military
structure, tradition, jargon, and ship pride was handled well. I had a
very hard time putting this down and was sad when it ended.
If you're in the mood for that class of "learning how to be part of a
collaborative structure" style of military SF, recommended.
Artifact Space reaches a somewhat satisfying conclusion, but leaves
major plot elements unresolved. Followed by Deep Black, which
doesn't have a release date at the time of this writing.
Rating: 9 out of 10
The Fifth Elephant is the 24th Discworld and fifth Watch novel, and
largely assumes you know who the main characters are. This is not a good
place to start.
The dwarves are electing a new king. The resulting political conflict is
spilling over into the streets of Ankh-Morpork, but that's not the primary
problem. First, the replica Scone of Stone, a dwarven artifact used to
crown the Low King of the Dwarves, is stolen from the Dwarf Bread Museum.
Then, Vimes is dispatched to Überwald, ostensibly to negotiate increased
fat exports with the new dwarven king. And then Angua disappears,
apparently headed towards her childhood home in Überwald, which
immediately prompts Carrot to resign and head after her. The City Watch
is left in the hands of now-promoted Captain Colon.
We see lots of Lady Sybil for the first time since
Guards! Guards!, and there's a
substantial secondary plot with Angua and Carrot and a tertiary plot with
Colon making a complete mess of things back home, but this is mostly a
Vimes novel. As usual, Vetinari is pushing him outside of his comfort
zone, but he's not seriously expecting Vimes to act like an ambassador.
He's expecting Vimes to act like a policeman, even though he's way outside
his jurisdiction. This time, that means untangling a messy three-sided
political situation involving the dwarves, the werewolves, and the
vampires.
There is some Igor dialogue in this book, but
thankfully Pratchett toned it down a lot and it never started to bother
me.
I do enjoy Pratchett throwing Vimes and his suspicious morality at
political problems and watching him go at them sideways. Vimes's
definition of crimes is just broad enough to get him fully invested in a
problem, but too narrow to give him much patience with the diplomatic
maneuvering. It makes him an unpredictable diplomat in a
clash-of-cultures way that's fun to read about. Cheery and Detritus are great
traveling companions for this, since both of them also unsettle the
dwarves in wildly different ways.
I also have to admit that Pratchett is doing more interesting things with
the Angua and Carrot relationship than I had feared. In previous books, I
was getting tired of their lack of communication and wasn't buying the
justifications for it, but I think I finally understand why the
communication barriers are there. It's not that Angua refuses to talk to
Carrot (although there's still a bit of that going on). It's that
Carrot's attitude towards the world is very strange, and gets
stranger the closer you are to him.
Carrot has always been the character who is too earnest and
straightforward and good for Ankh-Morpork and yet somehow makes it work,
but Pratchett is doing something even more interesting with the concept of
nobility. A sufficiently overwhelming level of heroic ethics becomes
almost alien, so contrary to how people normally think that it can make
conversations baffling. It's not that Carrot is perfect (sometimes he
does very dumb things), it's that his natural behavior follows a set of
ethics that humans like to pretend they follow but actually don't and
never would entirely. His character should be a boring cliche or an
over-the-top parody, and yet he isn't at all.
But Carrot's part is mostly a side plot. Even more than
Jingo, The Fifth Elephant is
establishing Vimes as a force to be reckoned with, even if you take him
outside his familiar city. He is in so many ways the opposite of
Vetinari, and yet he's a tool that Vetinari is extremely good at using.
Colon of course is a total disaster as the head of the Watch, and that's
mostly because Colon should never be more than a sergeant, but it's also
because even when he's taking the same action as Vimes, he's not doing it
for the same reasons or with the same stubborn core of basic morality and
loyalty that's under Vimes's suspicious conservatism.
The characterization in the Watch novels doesn't seem that subtle or deep
at first, but it accumulates over the course of the series in a way that I
think is more effective than any of the other story strands. Vetinari,
Vimes, and Carrot all represent "right," or at least order, in overlapping
stories of right versus wrong, but they do so in radically different ways
and with radically different goals. Each time one of them seems
ascendant, each time one of their approaches seems more clearly correct,
Pratchett throws them at a problem where a different approach is required.
It's a great reading experience.
This was one of the better Discworld novels even though I found the
villains to be a bit tedious and stupid. Recommended.
Followed by The Truth in publication order. The next Watch novel
is Night Watch.
Rating: 8 out of 10
I loaded up this title with buzzwords. The basic idea is that IM systems shouldn't have to use only the Internet. Why not let them be carried across LoRa radios, USB sticks, local Wifi networks, and yes, the Internet? I'll first discuss how, and then why.
How to set it up
I've talked about most of the pieces here already:
Delta Chat, which is an IM app that uses mail servers (SMTP and IMAP) as transport, and OpenPGP encryption for security.
Yggdrasil, which forms an auto-mesh network over things like ad-hoc wifi. It's not asynchronous itself, but its properties may be used to build an asynchronous email network; email itself can be asynchronous across any carrier. Others such as Tor could also be used.
And various other physical carriers such as LoRa and XBee SX radios.
Email servers. For instance, there are existing instructions for running Postfix or Exim over NNCP. These can be easily adapted to run across something like Filespooler instead. These can be run locally on a laptop, or, with a tool such as Termux, on Android.
So, putting this together:
All Delta Chat needs is access to an SMTP and IMAP server. This server could easily reside on localhost.
Existing email servers support transport of email using non-IP transports, including batch transports that can easily store it in files.
These batches can be easily carried by NNCP, Syncthing, Filespooler, etc. Or, if the connectivity is good enough, via traditional networking using Yggdrasil.
Side note: Both NNCP and email servers support various routing arrangements, and can easily use intermediary routing nodes. Syncthing can also mesh. NNCP supports asynchronous multicast, letting your messages opportunistically find the best way to their destination.
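As a sketch of how queued mail rides over NNCP: the node name "relay" and the exec handle "sendmail" below are hypothetical, and assume the peer's NNCP configuration exposes such a handle (the Postfix/Exim instructions mentioned above cover the mail-server side of this wiring):

```shell
# Queue a message for asynchronous delivery; NNCP stores it in its local
# spool until the "relay" node is reachable over any configured carrier.
nncp-exec relay sendmail recipient@example.org < message.eml

# Later, when some connectivity exists (wifi, a USB-stick run, a LoRa
# link...), exchange all queued packets with the peer:
nncp-call relay
```

nncp-exec and nncp-call are stock NNCP tools; which carriers nncp-call can use depends entirely on your node configuration.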
OK, so why would you do it?
You might be thinking, doesn't asynchronous mean slow? Well, not necessarily. Asynchronous means "reliability is more important than speed"; that is, slow (even to the point of weeks) is acceptable, but not required. NNCP and Syncthing, for instance, can easily deliver within a couple of seconds.
But let's step back a bit. Let's say you're hiking in the wilderness in an area with no connectivity. You get back to your group at a campsite at the end of the day, and have taken some photos of the forest and sent them to some friends. Some of those friends are at the campsite; when you get within signal range, they get your messages right away. Some of those friends are in another country. So one person from your group drives into town and sits at a coffee shop for a few minutes, connected to their wifi. All the messages from everyone in the group go out, all the messages from outside the group come in. Then they go back to camp and the devices exchange messages.
Pretty slick, eh?
Note: this article also has a more permanent home on my website, where it may be periodically updated.
From November 2nd to 4th, 2022, the 19th edition of
Latinoware - Latin American Congress of Free Software
and Open Technologies took place in Foz do Iguaçu. After 2 years happening
online due to the COVID-19 pandemic, the event was back in person and we felt
the Debian Brasil community should be there.
Our last time at Latinoware was in 2016.
The Latinoware organization provided the Debian Brazil community with a booth
so that we could have contact with people visiting the open exhibition area and
thus publicize the Debian project. During the 3 days of the event, the booth was
organized by me (Paulo Henrique Santana) as Debian Developer, and by Leonardo
Rodrigues as Debian contributor. Unfortunately, Daniel Lenharo had an issue and
could not travel to Foz do Iguaçu (we missed you there!).
A huge number of people visited the booth. The beginners (mainly students)
who didn't know Debian asked what our group was about, and we explained various
concepts such as what Free Software is, what a GNU/Linux distribution is, and
what Debian itself is. We also received people from the Brazilian Free Software
community and from other Latin American countries who were already using a
GNU/Linux distribution and, of course, many people who were already using
Debian. We had some special visitors such as Jon "maddog" Hall, Debian Developer
Emeritus Otávio Salvador, Debian Developer Eriberto Mota, and Debian Maintainers
Guilherme de Paula Segundo and Paulo Kretcheu.
Photo from left to right: Leonardo, Paulo, Eriberto and Otávio.
Photo from left to right: Paulo, Fabian (Argentina) and Leonardo.
In addition to talking a lot, we distributed Debian stickers that were produced
a few months ago with Debian's sponsorship to be distributed at DebConf22
(and that were left over), and we sold several Debian t-shirts produced by the
Curitiba Livre community.
We also had 3 talks included in the official Latinoware schedule.
I talked about
"how to become a Debian contributor by doing translations" and "how the
SysAdmins of a global company use Debian", and
Leonardo talked about the
"advantages of Open Source telephony in companies".
Photo: Paulo in his talk.
Many thanks to the Latinoware organization for once again welcoming the Debian
community and kindly providing spaces for our participation, and we
congratulate all the people involved in the organization for the success of
this important event for our community. We hope to be present again in 2023.
We also thank Jonathan Carter for approving financial support from Debian for
our participation at Latinoware.
The Road to Gandolfo
I think I had read this book some 10-12 years back and somehow ended up reading it again. Apparently, the author had originally published this story under another pen name. It is possible that I read it under that name and hence forgot all about it. This book/story is full of innuendo, irony, sarcasm and basically the thrill of life. There are two main characters in the book. The first is General MacKenzie, who has spent almost 3 to 4 decades as a spy/counterintelligence expert in the Queen's service. And while he outclasses them all even at the ripe age of 50, he is thrown out under the pretext of conduct unbecoming of an officer.
The other main character is Sam Devereaux. This gentleman is an army lawyer who is basically counting the days until he completes his tour of duty as a military lawyer and can start his corporate civil law practice with somebody he knows. Much to his dismay, with under a week left in his tour of duty, he is summoned to try and extradite General MacKenzie, who has been put under house arrest. Apparently, in China there was a sculpture of a great Chinese gentleman in the nude. For reasons unknown, or rather not shared herein, the General breaks part of the sculpture. This, of course, enrages the Chinese; they call it a diplomatic incident and put the General under house arrest. Unfortunately for both the General and his captors, he decides to escape. While he does succeed in entering the American embassy, he finds himself persona non grata and is thrown back outside, where the Chinese recapture him.
This is where the Embassy & the Govt. decide it would be better if somehow the General could be removed from China permanently, so he doesn't cause any further diplomatic incidents. In order to do that, Sam's services are bought.
Now, in order to understand the General, Sam learns that he has 4 ex-wives. He promptly goes and meets them to understand why the General behaved as he did. He apparently also peed on the American flag. To his surprise, all four ex-wives are still very much in love with the General. During the course of interviewing the ladies he is seduced by them, and also gives names to their chests in order to differentiate between each one of them. Later he is seduced by the eldest of the four wives and they spend the evening together.
The next day Sam meets, and is promptly manhandled by, the General, and the diplomatic papers are seen by the General. After meeting the General and the Chinese counterpart, they quickly agree to extradite him, as they do not know how to keep the General under control. During his house arrest, the General reads one of the "communist rags", as he puts it, and gets the idea to kidnap the Pope, and that forms the basis of the story.
Castel Gandolfo is a real place in Italy and is apparently the papal residence where the Pope goes to reside every summer. The book was written in 1976, hence in the book the General decides to form a corporation for which he would raise funds in order to carry out the kidnapping. The amount in 1976 was 40 million dollars, a big sum; to keep up with the times, think of something like 40 billion dollars to get the scale of things.
Now, while a part of me wants to tell the rest of the story, the story isn't really mine to tell. Read The Road to Gandolfo for the rest. While I can't guarantee much, I can say you might find yourself constantly amused by the antics of the General, Sam, and the General's ex-wives. There are also a few minor characters that you will meet along the way; I hope you discover them and enjoy the book immensely, as I have.
One thing I have to say: while I was reading it, I very much got vibes of Not a Penny More, Not a Penny Less by Jeffrey Archer. As shared before, lots of twists and turns; enjoy the ride.
Webforms
Webforms are nothing but forms you fill in on the web, or WWW. Webforms were a thing from the early 90s and still are today. I was supposed to register at https://www.swavlambancard.gov.in/ almost a month back but procrastinated until a couple of days ago, and with good reason. I was hoping one of my good friends would help me, but they had their own things going on. So finally, I tried to fill in the form a few days back. It took me almost 30-odd attempts to finally submit the form and be given an enrollment number. Why it took me 30-odd attempts should tell you the reason:
I felt like I was filling in a form from the 1990s rather than today, because:
The form neither knows its state nor saves data during a session. This lesson was learned a long time back by almost all service providers except the Govt. of India. Browsers on both mobile and desktop can save data during a session. If you don't know what I mean by that, go to about:preferences#privacy in Firefox and look at Manage Data. There you will find most sites do store some data along with cookies, arguably to help make your web experience better. Chrome or Chromium has the same thing, perhaps under a different name, but it's the same thing. But that is not all.
None of the fields have any verification. The form is 3 pages long. The verification at the end of the document doesn't tell you what is wrong and what needs to be corrected. Really, think on this: I am on a 24-inch LED monitor, and I had to fill in the form at least 20-30 times before it was accepted. And guess what, I have no clue why it was finally accepted: the same data, the same everything, and after the nth time it went through. Now, if I am facing such a problem when I have some idea of how technology works, how are people trying to fill in this form on 6-inch mobiles supposed to manage, many of them not at all as clued in to technology as I am?
I could go on outlining many of the issues that I faced, but they are all similar in many ways to the problems faced while filling in the NEW Income Tax forms. Of course, the New Income Tax portal is a whole ball game in itself, as it gives new errors every time instead of solving them. Most C.A.s have turned to third-party tools that enable you to upload XML-compliant data to the New Income Tax portal, but this is for businesses and those who can afford it. Again, even that is in a sort of messy state, but that is a whole other tale altogether.
One of the reasons, to my mind, why the forms are designed the way they are is so that people go to specific cybercafes, or get individuals to fill in and upload the form for them, and those people make more money. I was told to go to a specific cybercafe and meet a certain individual, and he asked for INR 500/- to do the work. While I don't have financial problems, I was more worried about my data going into the wrong hands. But I can see a very steady way to make money without doing much hard work.
Hearing Loss info.
Now, because I had been both to Kamla Nehru Hospital as well as Sassoon, and especially the deaf department, I saw many kids with half-formed ears. I asked the doctors, and they shared that this is due to malnutrition. We do know that women during pregnancy need more calories, more of everything, as they are eating for two bodies, not one. And this is large-scale: apparently more than 5 percent of the population have children like this. And that number was from 2014; what it is today nobody knows. I also came to know that at least some people, like me, became deaf due to Covid. I asked the doctors if they knew of people who had become deaf due to Covid. They basically replied in the negative, as they don't have the resources to monitor this. The Govt. has an idea of a health ID, but just like Aadhaar it has too many serious, sinister implications. Somebody had shared with me a long time back that in India systems work in spite of Govt. machinery rather than because of it, meaning that the Government ties itself into several knots and then people have to be creative to figure a way out to help people. I found another issue while dealing with them.
Apparently, even though I have 60% hearing loss I would be given a certificate of 40% hearing loss, and they call it "Temporary Progressive Loss". I saw almost all the people who had come, many of them with far more severe deficiencies than mine, getting the same or a similar certificate. All of them got "Temporary Progressive". Some of the cases were really puzzling. For example, I met another Agarwal who had had a severe accident a few months ago, with some kind of paralysis and bone issue. The doctors have given up, but even that gentleman was given "Temporary Progressive". From what little I could understand, the idea is that if there is a possibility of things becoming better over time, then it should be given. Another gentleman had dwarfism. Even he was given the same certificate. I think there have been orders from above, so that even people with real difficulties are not helped. Another point: if you look at it in a macro sense, it presents a somewhat rosy picture. If someone were to debunk the Govt. data, either from India or abroad, then from the GOI's perspective they have an agenda, even though the people who are suffering are our brothers and sisters. And all of this I can say only because I can read, write and articulate. Many of them may not even have a voice or a platform.
Even to get this Temporary Progressive disability certificate there was more than 4 months of running from one place to the other, 4 months of accumulated work. This I can share and tell from my own experience; who knows how much others might have suffered for the same. In my case a review will happen after 5 years; in most other cases they have given only 1 year. Of course, this does justify people's jobs, and perhaps partly it may be due to that. These are times when I really miss being able to hear; otherwise I could have fleshed out a lot more of other people's sufferings.
And just so people know and understand: this is happening in the heart of a city whose population easily exceeds 6 million and which is supposed to be a progressive city. I do appreciate and understand the difficulties that the doctors are placed under.
Mum's Birthday & Social Engineering.
While I don't want to get into details, in the last couple of weeks mum's birthday came around, and it had totally escaped me. I have been trying to disassociate myself from her, and at times it's hard; you don't remember, and then somebody makes you remember. So, on one hand guilt, and on the other not knowing what to do. If she were alive I would have bought a piece of cake or something. I didn't feel like it, hence I donated some money to the local aged home. This way at least I hope they have some semblance of peace. All of them are of her age group.
The other thing that I began to observe in earnest: fake identities have become the norm. Many people took Elon Musk's portrait while using their own names in their handles, but even then Elon "Free Speech" Musk banned them. So much for free speech. Then I saw quite a few handles that have cute women as their profile picture, but they are good at social engineering. This started only a couple of weeks back, and I have seen quite a few handles leaving Twitter and joining Mastodon. Also, I have been hearing that many admins of Mastodon pods are unable to get on top of this. Also, a lot of people are complaining that the UI isn't as user-friendly as Twitter's. Do they not realize that Twitter has its own IP, and any competing network can't copy or infringe on their product? Otherwise, they would be sued, like Ford was, and the plaintiff could potentially win. I am not really gonna talk much about it, as the blog post has become quite long and that needs its own post to do any sort of justice to it. Till later, people.
If you've done anything in the Kubernetes space in recent years, you've most likely come across the words "Service Mesh". It's backed by a set of mature technologies that provide cross-cutting networking, security and infrastructure capabilities to workloads running in Kubernetes, in a manner that is transparent to the actual workload. This abstraction enables application developers not to worry about building in otherwise sophisticated capabilities for networking, routing, circuit-breaking and security, and to simply rely on the services offered by the service mesh.
In this post, I'll be covering Linkerd, which is an alternative to Istio. It went through a significant re-write when it transitioned from the JVM to a Go-based control plane and a Rust-based data plane a few years back; it is now a part of the CNCF and is backed by Buoyant. It has proven itself widely in production workloads and has a healthy community and release cadence.
Linkerd achieves this with a sidecar container that communicates with a Linkerd control plane, which allows central management of policy, telemetry, mutual TLS, traffic routing, shaping, retries, load balancing, circuit-breaking and other cross-cutting concerns before the traffic hits the container. This makes the task of implementing the application services much simpler, as these concerns are managed by the container orchestrator and the service mesh. I covered Istio in a prior post a few years back, and much of that content is still applicable here, if you'd like to have a look.
Here are the broad architectural components of Linkerd. The components are separated into the control plane and the data plane. The control plane components live in their own namespace and consist of a controller that the Linkerd CLI interacts with via the Kubernetes API. The destination service is used for service discovery, TLS identity, access-control policy for inter-service communication, and service profile information on routing, retries and timeouts.
The identity service acts as the Certificate Authority, which responds to Certificate Signing Requests (CSRs) from proxies for initialization and for service-to-service encrypted traffic. The proxy injector is an admission webhook that automatically injects the Linkerd proxy sidecar and the init container into a pod when the linkerd.io/inject: enabled annotation is present on the namespace or workload.
On the data plane side are two components. First, the init container, which is responsible for automatically forwarding incoming and outgoing traffic through the Linkerd proxy via iptables rules. Second, the Linkerd proxy, a lightweight micro-proxy written in Rust, which is the data plane itself.
I will be walking you through the setup of Linkerd (2.12.2 at the time of writing) on a Kubernetes cluster. Let's see what's running on the cluster currently. This assumes you have a cluster running, and that kubectl is installed and available on the PATH.
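If you don't have the CLI yet, the Linkerd project publishes an installer script (verify the current instructions on linkerd.io before piping a script into your shell):

```shell
# Download and run the official Linkerd CLI installer; it places the
# binaries under ~/.linkerd2/bin by default.
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
```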
On most systems, this should be sufficient to set up the CLI. You may need to restart your terminal to load the updated paths. If you have a non-standard configuration and linkerd is not found after the installation, add the following to your PATH to be able to find the cli:
export PATH=$PATH:~/.linkerd2/bin/
At this point, checking the version would give you the following:
$ linkerd version
Client version: stable-2.12.2
Server version: unavailable
Setting up the Linkerd Control Plane
Before installing Linkerd on the cluster, run the following step to check the cluster for pre-requisites:
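The pre-flight validation is done with the check subcommand:

```shell
# Validate the cluster before installing the control plane:
linkerd check --pre
```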
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

pre-kubernetes-setup
--------------------
√ control plane namespace does not already exist
√ can create non-namespaced resources
√ can create ServiceAccounts
√ can create Services
√ can create Deployments
√ can create CronJobs
√ can create ConfigMaps
√ can create Secrets
√ can read Secrets
√ can read extension-apiserver-authentication configmap
√ no clock skew detected

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

Status check results are √
All the pre-requisites appear to be good right now, so installation can proceed. The first step of the installation is to set up the Custom Resource Definitions (CRDs) that Linkerd requires. The linkerd cli only prints the resource YAMLs to standard output and does not create them directly in Kubernetes, so you need to pipe the output to kubectl apply to create the resources in the cluster that you're working with.
$ linkerd install --crds | kubectl apply -f -
Rendering Linkerd CRDs...
Next, run linkerd install | kubectl apply -f - to install the control plane.
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/meshtlsauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/networkauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serverauthorizations.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/servers.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
Next, install the Linkerd control plane components in the same manner, this time without the --crds switch:
$ linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
secret/linkerd-sp-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
secret/linkerd-policy-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-policy-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-policy created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-destination-policy created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
configmap/linkerd-identity-trust-roots created
service/linkerd-identity created
service/linkerd-identity-headless created
deployment.apps/linkerd-identity created
service/linkerd-dst created
service/linkerd-dst-headless created
service/linkerd-sp-validator created
service/linkerd-policy created
service/linkerd-policy-validator created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
secret/linkerd-config-overrides created
Kubernetes will start spinning up the control plane components; once the pods are up, running Linkerd's health checks should report the following:
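Assuming the CLI from earlier, the pods and overall health can be inspected with:

```shell
# The control plane pods live in the "linkerd" namespace:
kubectl get pods -n linkerd

# Run the full post-install health checks:
linkerd check
```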
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all pods
√ cluster networks contains all services

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ proxy-init container runs as root user if docker container runtime is used

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match

linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match

Status check results are √
Everything looks good.

Setting up the Viz Extension

At this point, the required components for the service mesh are set up, but let's also install the viz extension, which provides visualization capabilities that will come in handy later. Once again, linkerd uses the same pattern for installing extensions.
$ linkerd viz install | kubectl apply -f -
namespace/linkerd-viz created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
serviceaccount/metrics-api created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-delegator created
serviceaccount/tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-reader created
secret/tap-k8s-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
role.rbac.authorization.k8s.io/web created
rolebinding.rbac.authorization.k8s.io/web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-admin created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
serviceaccount/web created
server.policy.linkerd.io/admin created
authorizationpolicy.policy.linkerd.io/admin created
networkauthentication.policy.linkerd.io/kubelet created
server.policy.linkerd.io/proxy-admin created
authorizationpolicy.policy.linkerd.io/proxy-admin created
service/metrics-api created
deployment.apps/metrics-api created
server.policy.linkerd.io/metrics-api created
authorizationpolicy.policy.linkerd.io/metrics-api created
meshtlsauthentication.policy.linkerd.io/metrics-api-web created
configmap/prometheus-config created
service/prometheus created
deployment.apps/prometheus created
service/tap created
deployment.apps/tap created
server.policy.linkerd.io/tap-api created
authorizationpolicy.policy.linkerd.io/tap created
clusterrole.rbac.authorization.k8s.io/linkerd-tap-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-tap-injector created
serviceaccount/tap-injector created
secret/tap-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-tap-injector-webhook-config created
service/tap-injector created
deployment.apps/tap-injector created
server.policy.linkerd.io/tap-injector-webhook created
authorizationpolicy.policy.linkerd.io/tap-injector created
networkauthentication.policy.linkerd.io/kube-api-server created
service/web created
deployment.apps/web created
serviceprofile.linkerd.io/metrics-api.linkerd-viz.svc.cluster.local created
serviceprofile.linkerd.io/prometheus.linkerd-viz.svc.cluster.local created
A few seconds later, you should see the following in your pod list:
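The listing can be retrieved with (assuming the extension's default namespace):

```shell
# The viz extension installs its pods into the linkerd-viz namespace
kubectl get pods -n linkerd-viz
```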
The viz components live in the linkerd-viz namespace.

You can now check out the viz dashboard:
$ linkerd viz dashboard
Linkerd dashboard available at:
http://localhost:50750
Grafana dashboard available at:
http://localhost:50750/grafana
Opening Linkerd dashboard in the default browser
Opening in existing browser session.
The Meshed column indicates the workloads that are currently integrated with the Linkerd control plane. As you can see, there are no application deployments running right now.

Injecting the Linkerd Data Plane components

There are two ways to integrate Linkerd into the application containers:
1. by manually injecting the Linkerd data plane components
2. by instructing Kubernetes to automatically inject the data plane components

Inject Linkerd data plane manually

Let's try the first option. Below is a simple nginx-app that I will deploy into the cluster:
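The original manifest is not reproduced here; a minimal deployment along these lines (image tag and labels are illustrative) would look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx          # the only container in the pod at this point
        image: nginx:1.25
        ports:
        - containerPort: 80
```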
Back in the viz dashboard, I do see the workload deployed, but it isn't currently communicating with the Linkerd control plane, so it doesn't show any metrics, and the Meshed count is 0.

Looking at the Pod's deployment YAML, I can see that it only includes the nginx container:
Let's inject the linkerd data plane directly into this running container. We do this by retrieving the YAML of the deployment, piping it to the linkerd CLI to inject the necessary components, and then piping the modified resource to kubectl apply.
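This follows the standard linkerd inject pattern; the deployment name nginx-app is taken from the example above:

```shell
# Fetch the live deployment, inject the Linkerd proxy sidecar config,
# and re-apply the modified resource to trigger a rollout
kubectl get deployment nginx-app -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```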
Back in the viz dashboard, the workload is now integrated into the Linkerd control plane.

Looking at the updated Pod definition, we see a number of changes that linkerd has injected to integrate it with the control plane. Let's have a look:
At this point, the necessary components are set up for you to explore Linkerd further. You can also try out the jaeger and multicluster extensions, following a process similar to installing and using the viz extension, and explore their capabilities.

Inject Linkerd data plane automatically

In this approach, we shall see how to instruct Kubernetes to automatically inject the Linkerd data plane into workloads at deployment time. We can achieve this by adding the linkerd.io/inject annotation to the deployment descriptor, which causes the proxy injector admission webhook to execute and inject the linkerd data plane components automatically at deployment time.
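As a sketch, the annotation goes on the pod template metadata (deployment name reused from the earlier example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
      annotations:
        linkerd.io/inject: enabled   # proxy injector adds the data plane at admission time
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```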
This annotation can also be specified at the namespace level to affect all the workloads within the namespace. Note that any resources created before the annotation was added to the namespace will require a rollout restart to trigger the injection of the Linkerd components.

Uninstalling Linkerd

Now that we have walked through the installation and setup process of Linkerd, let's also cover how to remove it from the infrastructure and return to the state prior to its installation. The first step is to remove extensions, such as viz.
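The uninstall commands mirror the install pattern (this assumes viz is the only extension installed):

```shell
# Remove the viz extension first, then tear down the control plane itself
linkerd viz uninstall | kubectl delete -f -
linkerd uninstall | kubectl delete -f -
```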